
Machine Learning for Cyber-Physical Systems

Chapter in: Digital Transformation

Abstract

Machine Learning plays a crucial role in many innovations for Cyber-Physical Systems, such as production systems. On the one hand, this is due to the growing availability of data in ever better quality. On the other hand, the demands on such systems are also increasing: production systems must support more and more product variants, resource efficiency is increasingly in focus, and international competition forces companies to innovate faster. Machine Learning leverages data to address these challenges, with the goal of self-learning systems that improve over time. This article gives an overview of the various algorithms and methods available for this purpose and discusses the special requirements that Cyber-Physical Systems place on Machine Learning processes.
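The abstract's idea of a self-learning system that improves over time can be illustrated with a minimal sketch (not taken from the chapter, and far simpler than the methods it surveys): an online anomaly detector for a single CPS sensor stream that updates its own statistics with every reading. The class name, threshold `k`, and warm-up length are illustrative assumptions.

```python
# Illustrative sketch only: a self-learning anomaly detector for a CPS
# sensor stream. It maintains running estimates of mean and variance
# (Welford's online algorithm) and flags readings that deviate by more
# than `k` standard deviations from what it has learned so far.

class StreamingAnomalyDetector:
    def __init__(self, k=3.0, warmup=10):
        self.k = k            # deviation threshold, in standard deviations
        self.warmup = warmup  # readings to observe before flagging anything
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0         # running sum of squared deviations (Welford)

    def update(self, x):
        """Consume one sensor reading; return True if it looks anomalous."""
        anomalous = False
        if self.n >= self.warmup:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) > self.k * std
        # Online update of mean and variance -- the "learning" step:
        # the model improves as more process data arrives.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

# Example: a stable temperature signal followed by one outlier.
detector = StreamingAnomalyDetector()
readings = [20.1, 19.9, 20.0, 20.2, 19.8, 20.1, 20.0, 19.9, 20.1, 20.0, 35.0]
flags = [detector.update(r) for r in readings]
```

Real CPS deployments use far richer models (autoencoders, LSTMs, clustering, as surveyed in the chapter); the sketch only shows the common pattern of a model that adapts continuously to incoming production data.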



Author information

Correspondence to Oliver Niggemann.


Copyright information

© 2023 The Author(s), under exclusive license to Springer-Verlag GmbH, DE, part of Springer Nature

About this chapter


Cite this chapter

Niggemann, O., Zimmering, B., Steude, H., Augustin, J.L., Windmann, A., Multaheb, S. (2023). Machine Learning for Cyber-Physical Systems. In: Vogel-Heuser, B., Wimmer, M. (eds) Digital Transformation. Springer Vieweg, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-65004-2_17


  • DOI: https://doi.org/10.1007/978-3-662-65004-2_17


  • Publisher Name: Springer Vieweg, Berlin, Heidelberg

  • Print ISBN: 978-3-662-65003-5

  • Online ISBN: 978-3-662-65004-2

  • eBook Packages: Computer Science (R0)
