
A structure-preserving neural differential operator with embedded Hamiltonian constraints for modeling structural dynamics

  • Original Paper
  • Published in: Computational Mechanics

Abstract

Data-driven machine learning models are useful for modeling complex structures from empirical observations, bypassing the need to construct a physical model when the physics is not well understood or is otherwise difficult to model. One disadvantage of purely data-driven approaches is that they tend to perform poorly outside the original training domain. To mitigate this limitation, physical knowledge about the structure can be embedded in the neural network architecture. For large-scale systems, however, relevant physical properties such as the system state matrices may be expensive to compute. One way around this problem is to use scalar functionals, such as energy, to constrain the network to operate within physical bounds. We propose a neural network framework based on Hamiltonian mechanics that enforces a physics-informed structure on the model. The Hamiltonian framework allows us to relate the energy of the system to the measured quantities through the Euler–Lagrange equations of motion. In this work, the potential energy, kinetic energy, and Rayleigh damping terms are each modeled with a multilayer perceptron, and auto-differentiation is used to compute the partial derivatives needed to assemble the equations of motion. The network incorporates a numerics-informed loss function, defined via the residual of a multi-step integration scheme, so that it can be deployed as a neural differential operator. A physics-constrained autoencoder performs the coordinate transformation between measured and generalized coordinates. The result is a physics-informed, structure-preserving model of the structure that can form the basis of a digital twin for many applications. The technique is demonstrated on computational examples.
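To make the architecture concrete, the following is a minimal sketch in JAX [28] of the core idea: each energy term is a scalar-valued multilayer perceptron, and auto-differentiation assembles the equations of motion, written here in first-order Hamiltonian form. This is an illustration under stated assumptions, not the authors' implementation; the function names, layer sizes, and the parameterization of damping as the gradient of a learned Rayleigh dissipation function are all assumptions.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes):
    # Random initialization for a stack of dense layers (illustrative).
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) / jnp.sqrt(n), jnp.zeros(m))
            for k, n, m in zip(keys, sizes[:-1], sizes[1:])]

def mlp(params, x):
    # Scalar-valued multilayer perceptron with Swish activations.
    for W, b in params[:-1]:
        x = jax.nn.swish(W @ x + b)
    W, b = params[-1]
    return (W @ x + b).squeeze()

def hamiltonian(params_T, params_V, q, p):
    # H(q, p) = T(p) + V(q): kinetic and potential terms as separate networks.
    return mlp(params_T, p) + mlp(params_V, q)

def dynamics(params_T, params_V, params_D, q, p):
    # Hamilton's equations with a learned Rayleigh-type dissipation term R:
    #   dq/dt =  dH/dp
    #   dp/dt = -dH/dq - dR/dp
    dH_dq = jax.grad(hamiltonian, argnums=2)(params_T, params_V, q, p)
    dH_dp = jax.grad(hamiltonian, argnums=3)(params_T, params_V, q, p)
    dR_dp = jax.grad(lambda pp: mlp(params_D, pp))(p)
    return dH_dp, -dH_dq - dR_dp

# Illustrative usage with 4 generalized coordinates (hypothetical sizes).
kT, kV, kD = jax.random.split(jax.random.PRNGKey(0), 3)
n = 4
params_T = init_mlp(kT, [n, 14, 14, 1])
params_V = init_mlp(kV, [n, 14, 14, 1])
params_D = init_mlp(kD, [n, 14, 14, 1])
qdot, pdot = dynamics(params_T, params_V, params_D, jnp.ones(n), jnp.zeros(n))
```

In a full model of this kind, the returned time derivatives would feed the multi-step integration residual used as the training loss.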



References

  1. Kerschen G, Golinval J-C, Vakakis AF, Bergman LA (2005) The method of proper orthogonal decomposition for dynamical characterization and order reduction of mechanical systems: an overview. Nonlinear Dyn 41(1):147–169. https://doi.org/10.1007/s11071-005-2803-2

  2. Benner P, Gugercin S, Willcox K (2015) A survey of projection-based model reduction methods for parametric dynamical systems. SIAM Rev 57(4):483–531. https://doi.org/10.1137/130932715

  3. Benner P, Ohlberger M, Cohen A, Willcox K (2017) Model reduction and approximation: theory and algorithms. Computational Science and Engineering. Society for Industrial and Applied Mathematics, Philadelphia, PA. https://doi.org/10.1137/1.9781611974829

  4. Mooers G, Pritchard M, Beucler T, Ott J, Yacalis G, Baldi P, Gentine P (2021) Assessing the potential of deep learning for emulating cloud superparameterization in climate models with real-geography boundary conditions. J Adv Model Earth Syst 13(5):2020–002385. https://doi.org/10.1029/2020MS002385

  5. Krasnopolsky VM, Fox-Rabinovitz MS (2006) Complex hybrid models combining deterministic and machine learning components for numerical climate modeling and weather prediction. Neural Netw 19(2):122–134. https://doi.org/10.1016/j.neunet.2006.01.002. (Earth Sciences and Environmental Applications of Computational Intelligence)

  6. Wang K, Sun W (2018) A multiscale multi-permeability poroplasticity model linked by recursive homogenizations and deep learning. Comput Methods Appl Mech Eng 334:337–380. https://doi.org/10.1016/j.cma.2018.01.036

  7. Xie X, Bennett J, Saha S, Lu Y, Cao J, Liu WK, Gan Z (2021) Mechanistic data-driven prediction of as-built mechanical properties in metal additive manufacturing. NPJ Comput Mater 7(1):1–12. https://doi.org/10.1038/s41524-021-00555-z

  8. Kats D, Wang Z, Gan Z, Liu WK, Wagner GJ, Lian Y (2022) A physics-informed machine learning method for predicting grain structure characteristics in directed energy deposition. Comput Mater Sci 202:110958. https://doi.org/10.1016/j.commatsci.2021.110958

  9. Stoffel M, Bamer F, Markert B (2020) Deep convolutional neural networks in structural dynamics under consideration of viscoplastic material behaviour. Mech Res Commun 108:103565. https://doi.org/10.1016/j.mechrescom.2020.103565

  10. Wu R-T, Jahanshahi MR (2019) Deep convolutional neural network for structural dynamic response estimation and system identification. J Eng Mech. https://doi.org/10.1061/(ASCE)EM.1943-7889.0001556

  11. Raissi M, Perdikaris P, Karniadakis GE (2019) Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J Comput Phys 378:686–707. https://doi.org/10.1016/j.jcp.2018.10.045

  12. Kharazmi E, Zhang Z, Karniadakis GEM (2021) hp-VPINNs: variational physics-informed neural networks with domain decomposition. Comput Methods Appl Mech Eng 374:113547. https://doi.org/10.1016/j.cma.2020.113547

  13. Yang L, Meng X, Karniadakis GE (2021) B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. J Comput Phys 425:109913. https://doi.org/10.1016/j.jcp.2020.109913

  14. Jin X, Cai S, Li H, Karniadakis GE (2021) NSFnets (Navier–Stokes flow nets): physics-informed neural networks for the incompressible Navier–Stokes equations. J Comput Phys 426:109951. https://doi.org/10.1016/j.jcp.2020.109951

  15. Zheng Q, Zeng L, Karniadakis GE (2020) Physics-informed semantic inpainting: application to geostatistical modeling. J Comput Phys 419:109676. https://doi.org/10.1016/j.jcp.2020.109676

  16. Zhang E, Dao M, Karniadakis GE, Suresh S (2022) Analyses of internal structures and defects in materials using physics-informed neural networks. Sci Adv 8(7):0644. https://doi.org/10.1126/sciadv.abk0644

  17. Brink AR, Najera-Flores DA, Martinez C (2021) The neural network collocation method for solving partial differential equations. Neural Comput Appl 33(11):5591–5608. https://doi.org/10.1007/s00521-020-05340-5

  18. Sukumar N, Srivastava A (2022) Exact imposition of boundary conditions with distance functions in physics-informed deep neural networks. Comput Methods Appl Mech Eng 389:114333. https://doi.org/10.1016/j.cma.2021.114333

  19. Hennigh O, Narasimhan S, Nabian MA, Subramaniam A, Tangsali K, Fang Z, Rietmann M, Byeon W, Choudhry S (2021) NVIDIA SimNet™: an AI-accelerated multi-physics simulation framework. In: International conference on computational science. Springer, pp 447–461. https://doi.org/10.1007/978-3-030-77977-1_36

  20. Schein A, Carlberg KT, Zahr MJ (2021) Preserving general physical properties in model reduction of dynamical systems via constrained-optimization projection. Int J Numer Methods Eng 122(14):3368–3399. https://doi.org/10.1002/nme.6667

  21. Qian E, Kramer B, Peherstorfer B, Willcox K (2020) Lift & learn: physics-informed machine learning for large-scale nonlinear dynamical systems. Physica D 406:132401. https://doi.org/10.1016/j.physd.2020.132401

  22. Lee K, Trask N, Stinis P (2021) Machine learning structure preserving brackets for forecasting irreversible processes. Adv Neural Inf Process Syst 34:5696–5707

  23. Sharma H, Wang Z, Kramer B (2022) Hamiltonian operator inference: physics-preserving learning of reduced-order models for canonical Hamiltonian systems. Physica D 431:133122. https://doi.org/10.1016/j.physd.2021.133122

  24. Jin P, Zhang Z, Zhu A, Tang Y, Karniadakis GE (2020) SympNets: intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems. Neural Netw 132:166–179. https://doi.org/10.1016/j.neunet.2020.08.017

  25. Mattheakis M, Sondak D, Dogra AS, Protopapas P (2022) Hamiltonian neural networks for solving equations of motion. Phys Rev E 105:065305. https://doi.org/10.1103/PhysRevE.105.065305

  26. Greydanus S, Dzamba M, Yosinski J (2019) Hamiltonian neural networks. Adv Neural Inf Process Syst 32

  27. Rusch TK, Mishra S (2021) UnICORNN: a recurrent model for learning very long time dependencies. arXiv:2103.05487

  28. Bradbury J, Frostig R, Hawkins P, Johnson MJ, Leary C, Maclaurin D, Necula G, Paszke A, VanderPlas J, Wanderman-Milne S, Zhang Q. JAX: composable transformations of Python+NumPy programs. http://github.com/google/jax

  29. Brunton SL, Proctor JL, Kutz JN (2016) Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc Natl Acad Sci 113(15):3932–3937. https://doi.org/10.1073/pnas.1517384113

  30. Dormand JR, Prince PJ (1980) A family of embedded Runge–Kutta formulae. J Comput Appl Math 6(1):19–26. https://doi.org/10.1016/0771-050X(80)90013-3

  31. Virtanen P, Gommers R, Oliphant TE, Haberland M, Reddy T, Cournapeau D, Burovski E, Peterson P, Weckesser W, Bright J, van der Walt SJ, Brett M, Wilson J, Millman KJ, Mayorov N, Nelson ARJ, Jones E, Kern R, Larson E, Carey CJ, Polat İ, Feng Y, Moore EW, VanderPlas J, Laxalde D, Perktold J, Cimrman R, Henriksen I, Quintero EA, Harris CR, Archibald AM, Ribeiro AH, Pedregosa F, van Mulbregt P, SciPy 1.0 Contributors (2020) SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat Methods 17:261–272. https://doi.org/10.1038/s41592-019-0686-2

  32. Petrov EP, Ewins DJ (2003) Analytical formulation of friction interface elements for analysis of nonlinear multi-harmonic vibrations of bladed disks. J Turbomach 125(2):364–371. https://doi.org/10.1115/1.1539868

  33. Biewald L (2020) Experiment tracking with Weights and Biases. Software available from https://www.wandb.com/

  34. Ramachandran P, Zoph B, Le QV (2017) Searching for activation functions. https://doi.org/10.48550/arXiv.1710.05941

Acknowledgements

This work was funded by Sandia National Laboratories. Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. The University of California San Diego acknowledges Sub-Contract Agreement 2169310 from Sandia National Laboratories for its participation in this work.

Author information

Corresponding author

Correspondence to Michael D. Todd.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Autoencoder hyperparameters

The autoencoder hyperparameters were obtained by running a grid search on the platform Weights and Biases [33]. The validation loss as a function of training epochs is shown in Fig. 17. For all of these runs, the nonlinear activation function was the Swish function [34]. Table 2 summarizes the results for all runs considered; the run names were generated automatically by Weights and Biases. Based on these results, the selected autoencoder had 4 layers and 14 nodes per layer.

Fig. 17: Validation loss as a function of epochs during training

Table 2 Summary of hyperparameter grid search results
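As a rough illustration of this workflow, a grid search over network depth and width can be expressed as a Weights and Biases sweep. This is a hedged sketch, not the study's actual configuration: the grid values, project name, and the run_training helper are hypothetical.

```python
import wandb

# Hypothetical grid; the paper reports only that the selected
# configuration had 4 layers and 14 nodes per layer.
sweep_config = {
    "method": "grid",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "num_layers": {"values": [2, 4, 6]},
        "nodes_per_layer": {"values": [10, 14, 18]},
    },
}

def train():
    # Each agent invocation receives one grid point via wandb.config.
    wandb.init()
    cfg = wandb.config
    # run_training is a hypothetical helper that builds the autoencoder
    # with the given depth/width, trains it, and returns validation loss.
    val_loss = run_training(cfg.num_layers, cfg.nodes_per_layer)
    wandb.log({"val_loss": val_loss})
    wandb.finish()

sweep_id = wandb.sweep(sweep_config, project="hamiltonian-autoencoder")
wandb.agent(sweep_id, function=train)
```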

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Najera-Flores, D.A., Todd, M.D. A structure-preserving neural differential operator with embedded Hamiltonian constraints for modeling structural dynamics. Comput Mech 72, 241–252 (2023). https://doi.org/10.1007/s00466-023-02288-w
