Bayesian Physics-Informed Neural Networks for real-world nonlinear dynamical systems

Understanding real-world dynamical phenomena remains a challenging task. Across various scientific disciplines, machine learning has advanced as the go-to technology to analyze nonlinear dynamical systems, identify patterns in big data, and make decisions around them. Neural networks are now consistently used as universal function approximators for data with underlying mechanisms that are incompletely understood or exceedingly complex. However, neural networks alone ignore the fundamental laws of physics and often fail to make plausible predictions. Here we integrate data, physics, and uncertainties by combining neural networks, physics-informed modeling, and Bayesian inference to improve the predictive potential of traditional neural network models. We embed the physical model of a damped harmonic oscillator into a fully-connected feed-forward neural network to explore a simple and illustrative model system, the outbreak dynamics of COVID-19. Our Physics-Informed Neural Networks can seamlessly integrate data and physics, robustly solve forward and inverse problems, and perform well for both interpolation and extrapolation, even for a small amount of noisy and incomplete data. At only minor additional cost, they can self-adaptively learn the weighting between data and physics. Combined with Bayesian Neural Networks, they can serve as priors in a Bayesian Inference and provide credible intervals for uncertainty quantification. Our study reveals the inherent advantages and disadvantages of Neural Networks, Bayesian Inference, and a combination of both, and provides valuable guidelines for model selection. While we have only demonstrated these different approaches for the simple model problem of a seasonal endemic infectious disease, we anticipate that the underlying concepts and trends generalize to more complex disease conditions and, more broadly, to a wide variety of nonlinear dynamical systems.


Motivation
Modeling and predicting the behavior of complex nonlinear dynamical systems remains a challenging scientific task [24]. Numerous scientists are addressing this challenge by collecting more observational data than ever before; however, more often than not without a clear picture of how to sort, analyze, and understand these enormous amounts of information [10]. Throughout the past two decades, machine learning has advanced as the go-to technology to analyze big data and explore the massive design spaces associated with them [1]. Simply put, machine learning is an incredibly powerful strategy to make data-driven recommendations and decisions based on the input data alone. As such, it has become an indispensable technology for image or speech recognition, medical diagnostics, or self-driving cars. However, traditional machine learning ignores the fundamental laws of physics, and, as a consequence, performs well at fitting observations, but often fails to make consistent and plausible predictions. In the engineering science community, especially in disciplines that are traditionally not data-rich, these limitations have raised the question of how we can advance machine learning and build in our prior knowledge to respect underlying physical principles.
The trend of embedding physics into machine learning has recently drawn tremendous attention in various scientific applications [25]. As engineering scientists, we have been modeling complex nonlinear dynamical systems for hundreds of years and we have developed a reasonable quantitative knowledge about physical parameters, boundary conditions, and constraints [26]. Physics-informed machine learning now allows us to use this prior knowledge when training machine learning tools [1]. This not only makes the training more efficient, but also more accurate and robust, especially when working with missing, noisy, or sparse real-life data [32]. Physics Informed Neural Networks are a powerful and effective strategy to seamlessly integrate data and physics [27]. They introduce a learning bias by directly embedding the physics into the loss function of a neural network [28]. This makes the neural network more robust, especially in the presence of sparse or imperfect data, and can provide more accurate and physically consistent predictions, even outside the training window [10]. For example, recent studies have successfully applied Physics Informed Neural Networks to study the complex outbreak dynamics of COVID-19 [4,31] by integrating advanced epidemiology models into deep neural networks [11,34]. However, the success of these methods depends crucially on the amount of available data and the complexity of the system itself. For the COVID-19 pandemic, we now know that predictions typically fail beyond a two-week window [3,15], and that predicting outbreak dynamics beyond this range can have devastating consequences [7]. It seems intuitive to ask how we can quantify uncertainties to estimate the reliability of our models and to build confidence in our model predictions [17].
Real-world data are inherently stochastic, noisy, and incomplete, and naturally contain aleatoric uncertainty [20]. This is why we should never just look at the plain data. Instead, we should always interpret data in the context of models that allow us to quantify uncertainties [19]. A recently proposed promising strategy is to interpret Physics Informed Neural Networks within the framework of Bayesian statistics [32]. This approach combines a Bayesian Neural Network with a Physics Informed Neural Network to create prior distributions and uses Hamiltonian Monte Carlo methods to estimate posterior distributions [33]. Uncertainty quantification in the form of credible intervals is a natural and important by-product of such a Bayesian analysis [5]. Throughout the past decades, Bayesian statistics have undeniably gained massive popularity [21] and have stimulated the birth of an entirely new field, probabilistic programming [22]. Combined with Physics Informed Neural Networks, Bayesian statistics perform well for both forward and inverse problems including Poisson problems, flow problems, phase separation problems, and reaction-diffusion problems [33]. However, most of these applications train their network on synthetic data from a known solution with overlaid noise [18]. This motivates the question of how well Bayesian Physics Informed Neural Networks perform on real-world data from a nonlinear dynamical system for which the underlying physics are not entirely known, with both aleatoric and epistemic uncertainty.
The objective of our study is to illustrate the performance of Neural Networks, Bayesian Inference, and a combination of both using real-world data. However, rather than showcasing these methods on real big data, we focus on a simple data set that is easy to understand, interpret, and reproduce: the reported daily number of new COVID-19 cases worldwide [14] throughout the year of 2021 [9]. We identify a suitable, yet simple mathematical model with an analytical solution that allows us to characterize the data. Again, our objective is not to develop the most rigorous physical model to explain each and every feature of the outbreak dynamics of a global pandemic [8]. Instead, we adopt a simple physical model, a damped harmonic oscillator, to characterize the long-term dynamics of COVID-19 on a global scale. We integrate the case data and the oscillator model using Neural Networks and Bayesian Inference and compare their model equations, parameters, training, predictions, and uncertainties. We envision that this study sheds light on the performance of different cutting-edge machine learning tools, highlights their advantages and disadvantages, and helps engineering scientists to select the best method to analyze real-world nonlinear dynamical systems.

Figure 1 illustrates our real-world data to compare different machine learning approaches. As an example, we use the number of new COVID-19 cases worldwide [23] throughout the entire year of 2021, which we draw from a public database [9]. The thin fluctuating line shows the raw data, the reported daily new cases worldwide from January 1, 2021 to December 31, 2021. Global reporting started on January 22, 2020 with a total number of cases of 557 [13]. On January 1, 2021, at the beginning of our analysis window, the number of newly reported cases worldwide was 572,602, bringing the total number of cases up to 84,332,767. On December 31, 2021, at the end of our window, the total number of cases was 288,631,129.
Altogether, the reporting window saw three pronounced waves. The first wave had a minimum number of new cases of 281,223 on day 46, February 16, 2021, and a maximum number of 905,378 on day 118, April 29, 2021, as we can confirm from the global minimum and maximum in Figure 1. The second wave had a minimum number of new cases of 296,808 on day 172, June 22, 2021, and a maximum number of 819,336 cases on day 221, August 10, 2021, about four months later. To eliminate noise, reporting uncertainties, and weekday-weekend fluctuations, data analysts typically average the reported case numbers across a seven-day window [12]. The thick smooth line illustrates the seven-day moving average x̄(t) of the daily new cases. In the following sections, we will use both the raw data and their seven-day moving average as an example to illustrate the potential of Neural Networks, Bayesian Inference, and both methods combined.

Neural Network modeling - Fully-connected feed-forward neural network
To model the data in Figure 1 without any prior knowledge of the underlying physics, we use a neural network that approximates the model solution x, the reported daily new COVID-19 cases, by taking the time coordinates t as input. We adopt one of the simplest examples of neural networks, a fully-connected feed-forward neural network composed of multiple hidden layers. We denote the hidden variables of the k-th hidden layer as z_k and summarize the neural network, here for two hidden layers, as

z_1 = σ(W_1 t + b_1), z_2 = σ(W_2 z_1 + b_2), z_3 = W_3 z_2 + b_3,   (1)

where the output of the last layer is used to approximate the true solution, x ≈ z_3. Here, σ(•) denotes the non-linear activation function, for which we select a hyperbolic tangent function, σ(•) = tanh(•), and θ = {W_k, b_k} are the trainable network parameters, the weight matrix W_k and the bias vector b_k of the k-th layer of the network. Throughout this project, for illustrative purposes, we consider a fully-connected feed-forward network with one input, the time t, two hidden layers with 32 nodes each, and one output, the daily new cases x(t). For this type of network, W_1 ∈ R^{1×32}, W_2 ∈ R^{32×32}, W_3 ∈ R^{32×1}, and b_1 ∈ R^{32}, b_2 ∈ R^{32}, b_3 ∈ R^{1}, resulting in 32 + 32 × 32 + 32 = 1088 weights and 32 + 32 + 1 = 65 biases.
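For concreteness, this architecture can be sketched in NumPy. This is only an illustration of the forward pass and the parameter count; the random initialization, the variable names, and the linear output layer are assumptions, not the paper's training setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# One input t, two hidden tanh layers with 32 nodes each, one output x(t)
layer_sizes = [1, 32, 32, 1]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.normal(size=n) for n in layer_sizes[1:]]

def forward(t, weights, biases):
    """Evaluate x ≈ z3 for a batch of time points t (shape: n_points × 1)."""
    z = t
    for W, b in zip(weights[:-1], biases[:-1]):
        z = np.tanh(z @ W + b)           # hidden layers: z_k = tanh(W_k z_{k-1} + b_k)
    return z @ weights[-1] + biases[-1]  # output layer, assumed linear here

n_weights = sum(W.size for W in weights)  # 32 + 32*32 + 32 = 1088
n_biases = sum(b.size for b in biases)    # 32 + 32 + 1 = 65
x = forward(np.linspace(0.0, 1.0, 5).reshape(-1, 1), weights, biases)
```

The parameter counts reproduce the 1088 weights and 65 biases stated above.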

Physics Informed modeling - Damped harmonic oscillator
To model the data in Figure 1 using a physics-based model, we consider a general parameterized second-order differential equation,

r(x, ẋ, ẍ; ϑ) = 0,   (2)

where t ∈ [0, T] is the time, x(t) is the solution with initial conditions x(t_0) = x_0 and ẋ(t_0) = ẋ_0, r is the residual as a function of x and its first and second time derivatives ẋ and ẍ, and ϑ are the physics parameters. We now specify the general equation (2) for the model problem of a damped harmonic oscillator, which is governed by Newton's second law, the balance of forces,

m ẍ + c ẋ + k x = 0,   (3.1)

or, divided by the mass m,

ẍ + 2 δ ẋ + ω_0² x = 0.   (3.2)

We can parameterize the damped harmonic oscillator equation in terms of the mass m, the viscous damping coefficient c, and the stiffness k, or, equivalently, in terms of the damping ratio ζ = δ/ω_0 = c/(2√(mk)), the damping δ = c/(2m), and the angular frequency ω_0 = √(k/m). Equations (3.1) and (3.2) are homogeneous linear differential equations of second order with constant coefficients. For solutions to this type of equation, we can make an exponential ansatz of the form x(t) = C exp(λt), such that ẋ(t) = λ C exp(λt) and ẍ(t) = λ² C exp(λt). Inserting this ansatz into equations (3.1) and (3.2) yields their characteristic equations,

λ² + (c/m) λ + k/m = 0   (4.1)   and   λ² + 2 δ λ + ω_0² = 0.   (4.2)

These equations are quadratic equations in normal form with two solutions for λ,

λ_{1,2} = −c/(2m) ± √(c²/(4m²) − k/m)   (5.1)   and   λ_{1,2} = −δ ± ω_0 √(ζ² − 1).   (5.2)

From equation (5.2) we conclude that the damping ratio ζ = δ/ω_0, the ratio between damping δ and frequency ω_0, determines the type of the solution and, with it, the behavior of the system: A damped harmonic oscillator can be overdamped for ζ > 1 and δ > ω_0 with two different real-valued solutions for λ; critically damped for ζ = 1 and δ = ω_0 with two identical real-valued solutions for λ; or underdamped for ζ < 1 and δ < ω_0 with two conjugate complex solutions for λ. Here we consider the underdamped case.
The analytical solution of an underdamped harmonic oscillator describes an exponential decay of the oscillation,

x(t) = x_0 + A_0 exp(−δ t) cos(ω t + φ),   (6)

where A_0 is the amplitude, ω = √(ω_0² − δ²) is the natural frequency, and φ = arctan(−δ/ω) is the phase angle. Figure 2 illustrates the dynamics of a damped harmonic oscillator. The solid line highlights the analytical solution of m ẍ + c ẋ + k (x − x_0) = 0, or ẍ + 2 ζ ω_0 ẋ + ω_0² (x − x_0) = 0; the dashed lines highlight the exponential decay in amplitude, x_0 ± A_0 exp(−δt), where A_0 is the amplitude, δ = c/(2m) is the damping, ω_0 = √(k/m) is the frequency associated with the period T = 2π/ω_0, and x_0 is the offset. We fix the value of the mass to m = 1 and collectively summarize the remaining physics parameters in the parameter vector ϑ = {c, k, x_0}.
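The analytical solution can be verified numerically by inserting it back into the governing equation. In the sketch below, the parameter values c = 0.5, k = 4, x_0 = 2, A_0 = 1 are made-up illustrations, and the residual of the oscillator equation is evaluated with central finite differences:

```python
import math

# Illustrative (assumed) parameters for an underdamped oscillator with m = 1
c, k, x0, A0 = 0.5, 4.0, 2.0, 1.0
delta = c / 2.0                            # damping δ = c/(2m)
omega0 = math.sqrt(k)                      # angular frequency ω_0 = √(k/m)
omega = math.sqrt(omega0**2 - delta**2)    # natural frequency ω = √(ω_0² − δ²)
phi = math.atan(-delta / omega)            # phase angle φ = arctan(−δ/ω)

def x(t):
    """Analytical solution x(t) = x_0 + A_0 exp(−δt) cos(ωt + φ)."""
    return x0 + A0 * math.exp(-delta * t) * math.cos(omega * t + phi)

def residual(t, h=1e-4):
    """Residual ẍ + 2δẋ + ω_0²(x − x_0), with ẋ, ẍ by central finite differences."""
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    xddot = (x(t + h) - 2 * x(t) + x(t - h)) / h**2
    return xddot + 2 * delta * xdot + omega0**2 * (x(t) - x0)

# The residual should vanish (up to finite-difference error) at all times
max_res = max(abs(residual(t)) for t in [0.1, 0.5, 1.0, 2.0, 5.0])
```

Since ζ = δ/ω_0 = 0.125 < 1 here, the parameters indeed describe the underdamped case considered in the text.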

Machine Learning -Integrating data and physics
To create learning machines that seamlessly integrate the data from Figure 1, the physics from Figure 2, or both, we combine Neural Network modeling from Section 2.2 and Physics Informed modeling from Section 2.3 and learn the underlying network and physics parameters. To quantify the uncertainty of the learned parameters, and with it the quality of the model, we can integrate each of these methods into a Bayesian Inference.

Neural Networks in the top row of Figure 3 approximate the training data of the first 225 days well, but fail to predict the behavior outside the training window of the remaining 140 days. The neural network model simply continues the learned trend. Since day 225 corresponds to a plateau with a horizontal tangent, the neural network model predicts a flatline with constant case numbers for the remaining part of the year. This trend is similar when trained on both the daily case data in the top left and the seven-day average in the top right.
Physics Informed Neural Networks in the middle row approximate both the training data of the first 225 days and the behavior outside the training window of the remaining 140 days with a good accuracy. Throughout the entire year, the red dashed lines of the model remain close to the orange line of the seven-day case average. The fit of the Physics Informed Neural Network is virtually identical for training on the daily case data in the middle left and on the seven-day average in the middle right.
Bayesian Inference in the bottom row not only approximates both the training data and the behavior outside the training window, but also provides credible intervals to estimate the quality of the fit. Since the Bayesian Inference uses the same underlying physics as the Physics Informed Neural Network, its red dashed lines also remain close to the orange line of the seven-day case average throughout the entire year. However, the fit with respect to the training data is worse compared to the Physics Informed Neural Network, because we bias the solution towards the physics equations. In contrast to the previous two models in the top and middle rows, the Bayesian Inference displays a clear difference between training on raw versus averaged data. The light blue areas of its credible intervals are notably wider when the model parameters are inferred using the daily case data in the bottom left than when using the seven-day average data in the bottom right.

Figure 3: Machine learning - Integrating data and physics. Neural Networks approximate the training data well, but fail to predict the behavior outside the training window, when trained on both the daily case data (top left) and the seven-day average (top right). Physics Informed Neural Networks approximate both the training data and the behavior outside the training window, when trained on both the daily case data (middle left) and the seven-day average data (middle right). Bayesian Inference not only approximates both the training data and the behavior outside the training window, but also provides credible intervals, which are wider when model parameters are inferred using the daily case data (bottom left) than when using the seven-day average data (bottom right). Thin yellow lines indicate daily new case data, thick orange lines are their seven-day average, dashed red lines are the model simulations, light blue areas are its credible intervals, blue dots are the training data, here for 225 out of 365 days.
We will now describe details of the underlying equations for Neural Networks in Section 3 and for Bayesian Inference in Section 4.

Neural Network modeling
The objective of Neural Network modeling is to train the neural network (1) such that the model output x(t) best approximates the data x̄, by minimizing a loss function,

L(Θ) → min,   (7)

through iteratively updating the set of model parameters Θ. The set of model parameters could consist of the network parameters θ, the physics parameters ϑ, and a weighting coefficient ε,

Θ = { θ, ϑ, ε } with θ = { W_k, b_k } and ϑ = { c, k, x_0 },   (8)

where W_k and b_k are the network weights and biases from equation (1) and c, k, and x_0 are the physical damping, stiffness, and offset from equation (3). We specify the loss function (7) and the relevant parameters (8) for classical Neural Networks in Section 3.1, for Physics Informed Neural Networks in Section 3.2, and for Self Adaptive Physics Informed Neural Networks in Section 3.3.

Neural Networks
The objective of classical Neural Networks is to learn the network parameters θ, without prior knowledge of the underlying physics, by minimizing a loss function (7) that consists of a single term L_data,

L = L_data → min.   (9)

The data loss L_data penalizes the error between model x(t) and data x̄. We define it as the mean square error, the L_2-norm of the difference between model and data, || x(t) − x̄ ||², divided by the number of training points n_trn,

L_data = (1/n_trn) Σ_{i=1}^{n_trn} || x(t_i) − x̄_i ||².   (10)

We minimize the loss function (9) in terms of the mean square error (10) to optimize the network parameters θ, the network weights and biases W_k and b_k, using the ADAM optimizer, an adaptive algorithm for gradient-based first-order optimization.
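The data loss above is a plain mean square error. A minimal sketch, with made-up numbers for illustration:

```python
import numpy as np

def data_loss(x_model, x_data):
    """Mean square error: ||x(t_i) − x̄_i||² averaged over the n_trn training points."""
    x_model = np.asarray(x_model, dtype=float)
    x_data = np.asarray(x_data, dtype=float)
    return np.mean((x_model - x_data) ** 2)

# Tiny illustrative example: model outputs vs. (made-up) data
L = data_loss([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])   # (0 + 0.25 + 1.0) / 3
```

In training, this scalar would be minimized with respect to the network weights and biases, e.g. by the ADAM optimizer of a deep learning framework.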

Physics Informed Neural Networks
The objective of Physics Informed Neural Networks is to learn both the network parameters θ and the physics parameters ϑ by simultaneously training a neural network (1) and solving an underlying physics equation (3). We convert both into a single optimization problem that minimizes the loss function L to learn the parameters Θ = { θ, ϑ },

L = ( 1 − ε ) L_data + ε L_phys → min.   (12)

The first term, the data loss L_data, penalizes the error between model x(t) and data x̄; the second term, the physics loss L_phys, penalizes the physics residual r. Since both terms can be of a different order of magnitude, we weight them with the weighting coefficients ( 1 − ε ) and ε. For the first term, we use the mean square error, the L_2-norm of the difference between model and data, || x(t) − x̄ ||², divided by the number of training points n_trn, similar to Section 3.1,

L_data = (1/n_trn) Σ_{i=1}^{n_trn} || x(t_i) − x̄_i ||².   (13)

For the second term, we use the L_2-norm of the residual, || r ||², divided by the total number of sampling points n_smp,

L_phys = (1/n_smp) Σ_{i=1}^{n_smp} || r(t_i) ||².   (14)

A key step in calculating the physics loss (14) is to compute the time derivatives of the model solution x, ẋ = dx/dt and ẍ = dẋ/dt, which we address using automatic differentiation. Automatic differentiation is part of various deep learning frameworks, which makes it convenient for Physics Informed Neural Networks to evaluate derivatives. We minimize the overall loss function (12) to optimize the network parameters θ, the network weights W_k and biases b_k, and the physics parameters ϑ, the viscous damping c, stiffness k, and offset x_0, again using the ADAM optimizer, an adaptive algorithm for gradient-based first-order optimization.
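A minimal sketch of the combined loss: since the paper evaluates ẋ and ẍ with automatic differentiation inside a deep learning framework, the finite-difference derivatives below are only a framework-free stand-in, and all parameter values are made-up. The residual uses the offset form r = ẍ + cẋ + k(x − x_0) with m = 1 from Figure 2:

```python
import numpy as np

def pinn_loss(model, t_data, x_data, t_smp, c, k, x0, eps=1e-3, h=1e-4):
    """Combined loss L = (1 − ε) L_data + ε L_phys.

    `model` maps a time array to x(t); ẋ and ẍ are approximated by central
    finite differences here as a stand-in for automatic differentiation."""
    # Data loss: mean square error over the training points
    L_data = np.mean((model(t_data) - x_data) ** 2)
    # Physics residual r = ẍ + c ẋ + k (x − x0) at the sampling points, m = 1
    x = model(t_smp)
    xdot = (model(t_smp + h) - model(t_smp - h)) / (2 * h)
    xddot = (model(t_smp + h) - 2 * x + model(t_smp - h)) / h**2
    r = xddot + c * xdot + k * (x - x0)
    L_phys = np.mean(r ** 2)
    return (1 - eps) * L_data + eps * L_phys, L_data, L_phys

# Illustrative check: the exact damped-oscillator solution makes both terms vanish
c, k, x0, A0 = 0.5, 4.0, 2.0, 1.0
delta, omega0 = c / 2.0, np.sqrt(k)
omega = np.sqrt(omega0**2 - delta**2)
exact = lambda t: x0 + A0 * np.exp(-delta * t) * np.cos(omega * t)
t = np.linspace(0.0, 5.0, 50)
L, L_d, L_p = pinn_loss(exact, t, exact(t), t, c, k, x0)
```

Plugging the exact solution into both terms gives a data loss of zero and a physics loss at finite-difference accuracy, which confirms that the loss is consistent with the oscillator equation.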

Self Adaptive Physics Informed Neural Networks
The objective of Self Adaptive Physics Informed Neural Networks is to train a neural network (1) and solve an underlying physics equation (3) similar to Section 3.2, but now, rather than prescribing the weighting coefficient ε between data and physics, we learn it as a function of time t, along with the other parameters, Θ = { θ, ϑ, ε(t) } [16]. We adopt the same loss function L as in Section 3.2, in terms of the data loss, the weighted error between model x and data x̄, and the physics loss, the weighted norm of the physics residual r,

L = ( 1 − ε(t) ) L_data + ε(t) L_phys → min.   (16)

We minimize the overall loss function (16) to optimize the network parameters, W_k and b_k, the physics parameters, c, k, and x_0, and the weighting term ε(t). The dynamics of the weighting term ε(t) provide insight into the relative importance of the data loss L_data and the physics loss L_phys and into the change of both terms in time. Figure 4 illustrates the features of the Neural Network models of this section. The classical Neural Networks from Section 3.1 minimize a loss function, L → min, that characterizes the data loss L_data, the error between data and model || x̄ − x(t) ||, shown on the left.

Comparison of Neural Network models
Physics Informed Neural Networks from Section 3.2 build an additional physics loss term L_phys into the loss function L, for example, the physics residual || r ||, shown on the right. In a forward problem, Neural Networks learn the network parameters θ = { W_k, b_k }. In an inverse problem, they additionally learn the physics parameters ϑ = { c, k, x_0 }. Self Adaptive Physics Informed Neural Networks from Section 3.3 also learn the dynamic weighting term ε(t) between the data loss L_data and the physics loss L_phys. In summary, Neural Networks minimize a loss function L → min that could consist of two terms, a data loss L_data, the error between data and model || x̄ − x(t) ||, and a physics loss L_phys, the physics residual || r ||, to learn the network parameters θ = { W_k, b_k }, the physics parameters ϑ = { c, k, x_0 }, and possibly even the dynamic weighting term ε(t) between L_data and L_phys. Figure 5 illustrates the result of a grid search for the Physics Informed Neural Network.
Here we show the simulation with one and two hidden layers, left and right columns, with two, four, and eight nodes each, from top to bottom. The number of network weights W_k and biases b_k increases with the number of layers and nodes. The one-hidden-layer models have 7, 13, and 25 parameters and the two-hidden-layer models have 13, 33, and 97 parameters. The six graphs confirm our intuition that the performance of the Physics Informed Neural Network increases with an increasing number of hidden layers, from left to right, and an increasing number of nodes per layer, from top to bottom. For both models, we also performed simulations with 16 and 32 nodes, but the performance did not increase markedly beyond the eight-node models. Figure 5 suggests that the one-hidden-layer models perform poorly, independent of the number of nodes. For each additional hidden layer, the number of parameters increases with the number of nodes squared, and the risk of overfitting increases. The two-hidden-layer models seem to be a reasonable compromise between underfitting and overfitting: They perform well, even with only eight nodes and 97 parameters. Throughout this manuscript, we use Neural Networks with two hidden layers and 32 nodes.

The classical Neural Network provides a good approximation of the training data during the first 225 days of the year, but fails to predict the behavior outside the training window. During the remaining 140 days, the classical Neural Network simply continues the learned trend, which, on day 225, suggests a plateau with constant case numbers for the remaining part of the year. For positive weighting coefficients ε > 0, the loss function accounts for both the data loss L_data and the physics loss L_phys. With increasing weighting coefficients ε = [ 10^−∞, 10^−5, 10^−4, 10^−3, 10^−2, 10^0 ], from top left to bottom right, the influence of the data loss L_data decreases and the influence of the physics loss L_phys increases.
In other words, the smaller the weighting coefficient ε, the better the fit of the data; the larger the weighting coefficient ε, the better the fit of the physics. For the special case of ε = 1 in the bottom right, the loss function degenerates to the loss function of the physical model, L → L_phys, from Section 2.3. The network solves the physics of the damped harmonic oscillator without accounting for any data, and converges to the trivial solution with zero amplitude. Figure 7 quantifies the contributions to the loss function for Physics Informed Neural Networks with weighting coefficients ε varying from 10^−6 to 10^−2. In Physics Informed Neural Networks, the loss function, L = ( 1 − ε ) L_data + ε L_phys, is a weighted average of the data-based mean square error, L_data, summarized in the left panel, and the physics-based residual, L_phys, summarized in the right panel. Each panel separately reports the descriptive loss, the sum of the daily errors in the training window, as orange dots, and the predictive loss, the sum of the daily errors in the prediction window, as red dots. With increasing weighting coefficients ε, the model places less emphasis on the data fit and more on the physics fit. The predictive data loss, descriptive physics loss, and predictive physics loss decrease, while the descriptive data loss increases. The quantitative comparison of the four losses in Figure 7 agrees with the qualitative observation in Figure 6: For smaller weighting coefficients ε, the red dashed lines of the model and the orange lines of the seven-day case average are in close proximity during the first 225 days of the descriptive regime, but deviate notably during the remaining 140 days of the predictive regime. For larger weighting coefficients ε, the red and orange lines deviate more during the first 225 days of the descriptive regime, but are in close proximity during the remaining 140 days of the predictive regime.
This example highlights the importance of selecting an appropriate weighting coefficient ε, which can bias the result of a Physics Informed Neural Network either towards the data or towards the physics. An ideal weighting coefficient minimizes the sum of all four loss terms. From Figure 7, we can estimate a reasonable weighting coefficient on the order of ε ≈ 10^−3. This value is of the same order of magnitude as the learned weighting term ε(t), with mean and standard deviation of ε = 0.007 ± 0.003, for the self-adaptive Physics Informed Neural Network from Section 3.3.

Figure 7: In Physics Informed Neural Networks, the loss function, L = ( 1 − ε ) L_data + ε L_phys, is a weighted average of the data-based mean square error, L_data, and the physics-based residual, L_phys. With increasing weighting coefficients ε, the model places less emphasis on the data fit and more on the physics fit; the predictive data loss, descriptive physics loss, and predictive physics loss decrease, while the descriptive data loss increases. Orange dots indicate the descriptive losses L_data and L_phys for the first 225 days; red dots indicate the predictive losses L_data and L_phys for the following 140 days.
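The text does not spell out how the learned weighting ε(t) is parameterized. One hypothetical sketch constrains a learnable per-time-point weight to the interval (0, 1) through a sigmoid, so that the pointwise loss remains a convex combination of data and physics terms:

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def self_adaptive_loss(d2, r2, a):
    """Pointwise weighted loss with a learnable ε(t_i) = sigmoid(a_i) ∈ (0, 1).

    d2: squared data errors ||x(t_i) − x̄_i||² at each time point,
    r2: squared physics residuals ||r(t_i)||²,
    a:  trainable logits, one per time point (learned alongside θ and ϑ)."""
    eps = sigmoid(a)
    return np.mean((1 - eps) * d2 + eps * r2), eps

# Made-up errors: ε(t) trades off data fit against physics fit per time point
d2 = np.array([0.0, 1.0, 4.0])
r2 = np.array([2.0, 0.5, 0.1])
L, eps = self_adaptive_loss(d2, r2, a=np.zeros(3))   # ε = 0.5 everywhere initially
```

During training, the logits a would be updated by the same gradient-based optimizer as the other parameters, letting ε(t) drift towards values like the reported 0.007 ± 0.003 where the data dominate.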

Bayesian Inference modeling
The objective of Bayesian Inference is to estimate the posterior probability distribution of a set of parameters Θ, such that the statistics of the model x(t) agree with the data x̄ [2] and possibly also satisfy the physics r, by maximizing the prior-weighted likelihood,

P(Θ | x̄, r) = P(x̄ | Θ) P(r | Θ) P(Θ) / [ P(x̄) P(r) ].   (20)

Here, P(x̄|Θ) is the likelihood of the data x̄ and P(r|Θ) is the likelihood of the physics r for given fixed parameters Θ; P(Θ) is the prior, the probability distribution of the model parameters Θ; P(x̄) and P(r) are the marginal likelihoods or evidence; and P(Θ|x̄, r) is the posterior, the conditional probability of the parameters Θ for the given data x̄ and physics r. The first likelihood function P(x̄|Θ) evaluates the quality of fit between the model output x(t), here the simulated number of new cases for given parameters Θ, and the observed data x̄, here the reported number of new cases at every day i. We define the daily likelihood function p_i(x̄|Θ), at each time point t_i, through a normal distribution, N(µ, σ), where the mean µ is the model result x(t_i) for the given parameter set Θ and the likelihood width is the standard deviation σ; the product ∏_{i=0}^{n} of all daily likelihood functions p_i(x̄|Θ) defines the overall likelihood P(x̄|Θ). The second likelihood function P(r|Θ) evaluates how accurately the model output x(t), here the simulated number of new cases for given parameters Θ, satisfies the physics equation r(t) = 0 at every day i. Again, we define the daily likelihood function p_i(r|Θ), at each time point t_i, through a normal distribution, N(µ, σ), where the mean µ is the model result of the physics equation r(x(t_i)) for the given parameter set Θ and the likelihood width is the standard deviation σ. The product ∏_{i=0}^{n} of all daily likelihood functions p_i(r|Θ) defines the overall likelihood P(r|Θ). Similar to the previous section on Neural Network modeling, the set of model parameters could consist of network parameters θ and physics parameters ϑ,

Θ = { θ, ϑ } with θ = { W_k, b_k } and ϑ = { c, k, x_0 },   (23)

where W_k and b_k are the network weights and biases from equation (1) and c, k, and x_0 are the physical damping, stiffness, and offset from equation (3).
We specify the Bayesian Inference (20) and the prior distributions of its parameters (23) for classical Bayesian Inference in Section 4.1, for Bayesian Neural Networks in Section 4.2, and for Bayesian Physics Informed Neural Networks in Section 4.3.
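In practice, the products of daily likelihoods above are evaluated as sums of log-likelihoods for numerical stability. A minimal sketch with normal daily likelihoods, where the likelihood widths and data values are made-up placeholders, not the paper's:

```python
import math

def normal_logpdf(value, mu, sigma):
    """Log of the normal density N(µ, σ) evaluated at `value`."""
    return -0.5 * ((value - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def log_likelihood(model_x, data_x, residuals, sigma_data, sigma_phys):
    """log P(x̄|Θ) + log P(r|Θ): sums over the daily normal likelihoods,
    with mean x(t_i) for the data term and the residual r(x(t_i)),
    compared against the target r = 0, for the physics term."""
    log_data = sum(normal_logpdf(d, m, sigma_data) for m, d in zip(model_x, data_x))
    log_phys = sum(normal_logpdf(0.0, r, sigma_phys) for r in residuals)
    return log_data + log_phys

# Made-up numbers: a perfect fit yields a higher log-likelihood than a worse one
ll_perfect = log_likelihood([1.0, 2.0], [1.0, 2.0], [0.0, 0.0], 0.05, 0.05)
ll_worse = log_likelihood([1.0, 2.0], [1.2, 2.0], [0.0, 0.1], 0.05, 0.05)
```

Maximizing this sum of log-terms, weighted by the log-prior, is exactly what the samplers in the following sections explore.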

Bayesian Inference
The objective of a classical Bayesian Inference is to estimate the posterior parameter distributions P(ϑ|x̄) of a set of physics parameters ϑ such that the statistics of a physical model x(t) agree with the data x̄. For the physical model, we assume that the number of daily new cases x(t) follows the physics of a damped harmonic oscillator from Section 2.3, m ẍ + c ẋ + k x = 0, where m is the mass, c is the viscous damping coefficient, and k is the stiffness. Without loss of generality, we set m ≡ 1. We consider the underdamped case, for which the damping ratio is smaller than one, c² < 4k. For this case, following equation (6), the analytical solution for the number of daily new cases is

x(t) = x_0 + A_0 exp(−δ t) cos(ω t + φ).   (24)

The physics of the number of new cases is uniquely determined by three parameters, ϑ = {c, k, x_0}, the viscous damping c, the stiffness k, and the offset x_0. We now use Bayes' theorem to estimate the posterior probability distribution of the physics parameters ϑ such that the statistics of the model x(t) from equation (24) agree with the reported case data x̄ in Figure 1,

P(ϑ | x̄) = P(x̄ | ϑ) P(ϑ) / P(x̄).   (25)

Here P(x̄|ϑ) is the likelihood, the conditional probability of the data x̄ for given fixed physics parameters ϑ; P(ϑ) is the prior, the probability distribution of the physics parameters ϑ; P(x̄) is the marginal likelihood or evidence; and P(ϑ|x̄) is the posterior, the conditional probability of the physics parameters ϑ for the given data x̄.
For the likelihood P(x̄|ϑ), the product of the individual point-wise likelihoods p_i(x̄|ϑ) according to equation (33), we select a normal distribution N(µ, σ), where the mean µ is the model result x(t_i) for the given physics parameters ϑ and the likelihood width σ follows a half-Cauchy distribution, σ ∼ halfCauchy(β = 1.0). For the prior probability distributions P(ϑ), we select independent weakly informed priors with log-normal distributions for the three physics parameters, ϑ = {c, k, x_0}, the viscous damping c, the stiffness k, and the offset x_0. Using the case data from Figure 1, we approximately infer the posterior distributions from Bayes' theorem (25) by employing a No-U-Turn sampler [6] implementation of the Python package PyMC3 [29]. We use two chains, with the first 2000 samples to tune the sampler and the subsequent 4000 samples to estimate the set of parameters ϑ, and employ a target acceptance rate of 0.85.
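A schematic of the unnormalized log-posterior that such a sampler explores; the log-normal hyperparameters µ = 0, σ = 1 below are placeholders, not the paper's prior choices, and in the paper the actual sampling is delegated to PyMC3's No-U-Turn sampler:

```python
import math

def lognormal_logpdf(value, mu=0.0, sigma=1.0):
    """Log density of a log-normal prior; µ, σ are placeholder hyperparameters."""
    if value <= 0.0:
        return -math.inf          # log-normal support is strictly positive
    return (-0.5 * ((math.log(value) - mu) / sigma) ** 2
            - math.log(value * sigma * math.sqrt(2 * math.pi)))

def log_posterior(c, k, x0, model_x, data_x, sigma):
    """Unnormalized log posterior: log-normal priors on c, k, x0 plus the
    normal data likelihood; the evidence P(x̄) drops out as a constant."""
    log_prior = sum(lognormal_logpdf(v) for v in (c, k, x0))
    log_lik = sum(-0.5 * ((d - m) / sigma) ** 2
                  - math.log(sigma * math.sqrt(2 * math.pi))
                  for m, d in zip(model_x, data_x))
    return log_prior + log_lik

# Made-up numbers: a better fit scores higher; negative damping is excluded
lp_good = log_posterior(0.5, 4.0, 2.0, [1.0, 2.0], [1.0, 2.0], sigma=1.0)
lp_bad = log_posterior(0.5, 4.0, 2.0, [1.0, 2.0], [3.0, 4.0], sigma=1.0)
lp_neg = log_posterior(-0.5, 4.0, 2.0, [1.0, 2.0], [1.0, 2.0], sigma=1.0)
```

The log-normal priors encode the weak prior knowledge that damping, stiffness, and offset are positive; parameter sets violating this receive zero posterior probability.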

Bayesian Neural Networks
The objective of Bayesian Neural Networks is to estimate the posterior parameter distributions P(θ|x̂) of a set of network parameters θ, without any prior information about the underlying physics, such that the statistics of the output of the neural network x(t) agree with the data x̂. For the neural network, we use the fully-connected feed-forward neural network composed of multiple hidden layers of Section 2.2 to approximate the output, the number of daily new cases x, from the input, the time coordinates t. We denote the hidden variables of the k-th hidden layer as zₖ and summarize the neural network, here for two hidden layers, as

z₁ = σ(W₁ t + b₁),  z₂ = σ(W₂ z₁ + b₂),  z₃ = W₃ z₂ + b₃,

where the output of the last layer approximates the true solution, x ≈ z₃. Similar to the previous sections, σ(•) denotes the non-linear activation function, for which we select a hyperbolic tangent function, σ(•) = tanh(•), and Wₖ and bₖ are the network weights and biases of the k-th layer. We use Bayes' theorem,

P(θ|x̂) = P(x̂|θ) P(θ) / P(x̂),

to estimate the posterior probability distribution of the network parameters θ such that the statistics of the neural network x ≈ z₃ from equation (28) agree with the reported case data x̂ in Figure 1. Here P(x̂|θ) is the likelihood, the conditional probability of the data x̂ for given fixed parameters θ; P(θ) is the prior, the probability distribution of the network parameters θ; P(x̂) is the marginal likelihood or evidence; and P(θ|x̂) is the posterior, the conditional probability of the parameters θ for the given data x̂.
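The forward pass of this two-hidden-layer network can be sketched in a few lines of numpy. The linear, activation-free output layer is our assumption (the text only states that the last layer approximates the solution), and the random weight initialization is purely for illustration:

```python
import numpy as np

def forward(t, params):
    """Two-hidden-layer feed-forward network with tanh activations."""
    W1, b1, W2, b2, W3, b3 = params
    z1 = np.tanh(t @ W1 + b1)      # first hidden layer, 32 nodes
    z2 = np.tanh(z1 @ W2 + b2)     # second hidden layer, 32 nodes
    z3 = z2 @ W3 + b3              # linear output layer, x ≈ z3 (our assumption)
    return z3

rng = np.random.default_rng(0)
params = (rng.normal(size=(1, 32)), np.zeros(32),
          rng.normal(size=(32, 32)), np.zeros(32),
          rng.normal(size=(32, 1)), np.zeros(1))
t = np.linspace(0.0, 1.0, 366).reshape(-1, 1)   # daily time coordinates
x = forward(t, params)                           # network output, shape (366, 1)
```

In the Bayesian setting, every entry of Wₖ and bₖ becomes a random variable, and the sampler draws entire parameter sets of this shape.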
For the likelihood P(x̂|θ), the product of the individual daily point-wise likelihoods pᵢ(x̂|θ) according to equation (33), we select a normal distribution N(µ, σ), where the mean µ is the model result x(tᵢ) for the given network parameters θ and the likelihood width σ is the standard deviation, σ = 0.05. For the prior probability distributions P(θ), we select independent weakly informed priors with zero-mean normal distributions for the two sets of network parameters, θ = {Wₖ, bₖ}, the network weights Wₖ and biases bₖ [33]. Using the case data from Figure 1, we approximately infer the posterior distributions from Bayes' theorem (29) by employing Hamiltonian Monte Carlo sampling [33] using TensorFlow Probability [30]. We use the first 3000 samples to tune the sampler and the subsequent 3000 samples to estimate the set of parameters θ.

Bayesian Physics Informed Neural Networks
The objective of Bayesian Physics Informed Neural Networks is to estimate the posterior parameter distributions P(Θ|x̂, r) of both network and physics parameters, Θ = {θ, ϑ}, such that the statistics of the Physics Informed Neural Network x(t) agree with the data x̂ and satisfy the physics equation r. As such, the Bayesian Physics Informed Neural Network integrates the Physics Informed Neural Network from Section 3.2 into a Bayesian Inference. We use Bayes' theorem,

P(Θ|x̂, r) = P(x̂, r|Θ) P(Θ) / [P(x̂) P(r)],

to estimate the posterior probability distribution of the parameters such that the statistics of the neural network x ≈ z₃ from equation (1) agree with the reported case data x̂ in Figure 1. Here P(x̂, r|Θ) = P(x̂|Θ) P(r|Θ) are the two likelihood functions, the conditional probabilities of the data x̂ and the physics r for given parameters Θ; P(Θ) = P(θ) P(ϑ) are the priors, the probability distributions of the network and model parameters θ and ϑ; P(x̂) P(r) is the marginal likelihood or evidence; and P(Θ|x̂, r) is the posterior, the conditional probability of the parameters Θ for the given data x̂ and physics r. The first likelihood function P(x̂|Θ) evaluates the quality of fit between the model output x(t) and the observed data x̂ at every day i as a product of the daily likelihood functions pᵢ(x̂|Θ), for which we select a normal distribution. The second likelihood function P(r|Θ) evaluates how accurately the model output x(t) satisfies the physics equation r = 0 at every day i as a product of the daily likelihood functions pᵢ(r|Θ), for which we also assume a normal distribution. For the prior probability distributions P(Θ), we select independent weakly informed priors with log-normal distributions for the three physics parameters, ϑ = {c, k, x₀}, and normal distributions for the network parameters, θ = {Wₖ, bₖ}. Using the case data from Figure 1, we approximately infer the posterior distributions from Bayes' theorem (29) by employing the Hamiltonian Monte Carlo sampler implementation of TensorFlow Probability [30].
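The two likelihood terms can be sketched together as one joint log-likelihood. In this illustrative numpy version the physics residual is approximated by finite differences on a uniform time grid; the actual Bayesian Physics Informed Neural Network differentiates the network output exactly by automatic differentiation, and the likelihood widths are assumed values:

```python
import numpy as np

def joint_loglik(x, t, x_hat, c, k, sigma_d=0.05, sigma_r=0.05):
    """Joint log-likelihood log P(x_hat|Θ) + log P(r|Θ).

    x      : model output on the uniform time grid t
    x_hat  : reported case data on the same grid
    The residual r = x'' + c x' + k x of the oscillator equation is
    approximated by finite differences here, for illustration only.
    """
    dt = t[1] - t[0]
    x_t = np.gradient(x, dt)               # first time derivative
    x_tt = np.gradient(x_t, dt)            # second time derivative
    r = x_tt + c * x_t + k * x             # physics residual, should vanish
    loglik_data = np.sum(-0.5 * np.log(2 * np.pi * sigma_d**2)
                         - (x_hat - x)**2 / (2 * sigma_d**2))
    loglik_phys = np.sum(-0.5 * np.log(2 * np.pi * sigma_r**2)
                         - r**2 / (2 * sigma_r**2))
    return loglik_data + loglik_phys
```

Outputs that fit the data but violate the oscillator equation are penalized by the second term, and vice versa, which is exactly the trade-off the posterior negotiates.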
For the Hamiltonian Monte Carlo sampler, we use 50 leapfrog steps with an initial time step of dt = 5 · 10⁻⁴, and use the first 3000 samples to tune the sampler and the subsequent 3000 samples to estimate the set of parameters Θ.

Figure 9 illustrates the features of the Bayesian Inference models of this section. The classical Bayesian Inference from Section 4.1 maximizes a prior-weighted likelihood, P(x̂|ϑ), in terms of the error between data and model, ‖x̂ − x(t)‖, to infer distributions of the physics parameters ϑ. The Bayesian Neural Networks from Section 4.2 maximize a prior-weighted likelihood, P(x̂|θ), in terms of the error between data and model, ‖x̂ − x(t)‖, to infer distributions of the network parameters θ. The Bayesian Physics Informed Neural Networks from Section 4.3 maximize a prior-weighted likelihood, P(x̂, r|Θ), in terms of the error between data and model, ‖x̂ − x(t)‖, shown on the left, and the error of the physics, ‖r‖, shown on the right, to infer the distributions of both network and physics parameters θ and ϑ.

First and foremost, we note that all six panels display a reasonable agreement between data and model, both inside the training window and beyond. With an increasing size of training data, from top left to bottom right, the error in the descriptive window increases and the error in the predictive window decreases. For example, the first peak of 905,378 cases on day 118, April 29, 2021, is best approximated for the smallest training data set in the top left, while the second peak of 819,336 cases on day 221, August 10, 2021, is best approximated for the largest training data set in the bottom right. In agreement with our expectation, the width of the credible interval, the light blue area, decreases with an increase in training data: the more data we use to train the model, the better its fit.
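The leapfrog integration that drives each Hamiltonian Monte Carlo proposal can be sketched generically, using the 50 steps and dt = 5 · 10⁻⁴ quoted above; the accept-reject step and the actual log posterior are omitted, so this is a schematic of the integrator only:

```python
import numpy as np

def leapfrog(q, p, grad_logp, n_steps=50, dt=5e-4):
    """One Hamiltonian Monte Carlo trajectory of n_steps leapfrog updates.

    q: parameters, p: auxiliary momenta, grad_logp: gradient of the log posterior.
    """
    q, p = q.copy(), p.copy()
    p += 0.5 * dt * grad_logp(q)          # initial half-step for the momenta
    for _ in range(n_steps - 1):
        q += dt * p                        # full position step
        p += dt * grad_logp(q)             # full momentum step
    q += dt * p
    p += 0.5 * dt * grad_logp(q)           # final half-step
    return q, p
```

Because leapfrog is symplectic, the Hamiltonian 0.5 pᵀp − log P(q) is nearly conserved along the trajectory, which is what keeps the acceptance rate of the sampler high.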
Compared to the classical Bayesian Inference in Figure 11, the damping is 18% larger, the stiffness is 21% smaller, and the offset is 6% larger. Notably, all three parameters of the Bayesian Physics Informed Neural Network show wider standard deviations than in the classical Bayesian Inference: the damping by a factor of 3, the stiffness by a factor of 23, and the offset by a factor of 10. These observations suggest that, for the chosen training window of n_trn = 225 days, the Bayesian Physics Informed Neural Network is not only more expensive than the Bayesian Inference, but also less robust.

Discussion
Understanding the behavior of nonlinear dynamical systems remains a challenging task. Machine learning has emerged as a powerful technology to provide insight into observational data. A vast variety of learning machines have been developed throughout the past decade, but to the inexperienced user, selecting the right model for a specific task is an overwhelming endeavor. This is amplified by the fact that tutorials or publications typically introduce new methods using artificially generated synthetic data for which the solution is already known. The objective of this study was to compare different families of models for real-world data, the reported number of COVID-19 cases throughout the year 2021, using a simple dynamical model, a damped harmonic oscillator. We reviewed the underlying model equations, parameters, and characteristic results of two families of models, Neural Networks and Bayesian Inference, each with three members of increasing complexity. Both Neural Networks and Bayesian Inference perform equally well on the raw daily case data, with noise, reporting uncertainties, and weekday-weekend fluctuations, and on their smooth seven-day average, as Figure 3 confirms. Our learned damping, stiffness, and offset of c = 1.251, k = 374.6, and x₀ = 0.558 for the Physics Informed Neural Network in Section 3.2, Figure 8, agree well with the inferred means of c = 1.111, k = 402.4, x₀ = 0.541 for the Bayesian Inference in Section 4.1, Figure 11, and with the combination of both methods, c = 1.312, k = 319.4, x₀ = 0.571, for the Bayesian Physics Informed Neural Network in Section 4.3, Figure 12. Our study confirms our general intuition that the quality of the fit increases with increasing size of the training data set. For the classical Bayesian Inference in Section 4.1, Figure 10 shows that the credible intervals become progressively narrower as the size of the training data increases.
Figure 13 compares the convergence of the three Physics Informed Neural Network models of our study: the plain Physics Informed Neural Network from Section 3.2, the Self Adaptive Physics Informed Neural Network from Section 3.3, and the Bayesian Physics Informed Neural Network from Section 4.3.

Figure 13: Comparison of Physics Informed Neural Network models. Convergence with increasing size of training data set. Predictive data loss, L_data, as mean squared error in the predictive window from t at n_trn to t at n_smp, for the Physics Informed Neural Network, the Self Adaptive Physics Informed Neural Network, and the Bayesian Physics Informed Neural Network. For all three methods, the predictive data loss decreases with an increasing number of training data points. The Physics Informed Neural Network performs poorly initially, but converges with increasing training set size. The Self Adaptive Physics Informed Neural Network performs well, even for small training sets. Both methods perform similarly with increasing training set size. The Bayesian Physics Informed Neural Network performs reasonably well for small data sets but only improves marginally with increasing training set size. Training data n_trn = [150, 175, 200, 225, 250, 275, 300].

Clearly, for all three methods, the prediction
error decreases with increasing size of the training data set. We close our study with a direct side-by-side comparison of all six methods from Sections 3 and 4 in Figure 14. Figure 15 provides a final recommendation to improve the performance of the Bayesian Physics Informed Neural Networks by scaling the physics equation. Table 1 summarizes the results of Figures 14 and 15 and discusses the advantages and disadvantages of each method.
Neural Networks are a simple and robust tool to fit training data, but have poor predictive potential. Classical Neural Networks have remarkable power to fit a network model to data without any underlying physics. By minimizing a loss function that characterizes the error between model and data, the neural network learns the network parameters θ = {Wₖ, bₖ}, the network weights and biases. For our example of a feed-forward network with one input, two hidden layers with 32 nodes each, and one output, the network has 32 + 32×32 + 32 = 1088 weights and 32 + 32 + 1 = 65 biases, resulting in a total of 1153 unknowns. Figure 14, top left, suggests that even in the absence of any prior physical knowledge, the classical Neural Network provides an excellent approximation of the training data. However, it fails to predict the behavior outside the training window, where it simply continues the linear horizontal trend from the last set of training data points. This becomes particularly clear when the underlying problem is non-monotonic or, as in our case, even oscillatory. Since classical Neural Networks provide no credible intervals, we have no way of knowing how poor the predictive potential of the network really is.
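The parameter count above follows directly from the layer sizes; a two-line check:

```python
# Parameter count for the network in our example: 1 input, two hidden
# layers of 32 nodes each, and 1 output.
layer_sizes = [1, 32, 32, 1]
weights = sum(m * n for m, n in zip(layer_sizes[:-1], layer_sizes[1:]))
biases = sum(layer_sizes[1:])
print(weights, biases, weights + biases)   # 1088 65 1153
```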
Physics Informed Neural Networks integrate data and physics and have a good predictive potential. Physics Informed Neural Networks fit a network model to data and, at the same time, to a physics-based model. If we know the underlying physics, they are an effective tool to constrain a deep learning method to a lower-dimensional manifold and create models that can be trained effectively with a small amount of data. Physics Informed Neural Networks enforce physics via soft penalty constraints.

Figure 14: Comparison of Neural Network and Bayesian Inference models. Neural Networks approximate the training data well, but fail to predict the behavior outside the training window (top left). Physics Informed Neural Networks approximate both the training data and the behavior outside the training window, but are sensitive to the weighting coefficient (middle left). Self Adaptive Physics Informed Neural Networks approximate both the training data and the behavior outside the training window, and simultaneously learn the weighting term ε(t) (bottom left). Bayesian Inference not only approximates both the training data and the behavior outside the training window, but also provides credible intervals of the model (top right). Bayesian Neural Networks approximate the training data well, but fail to predict the behavior outside the training window, where they generate wide credible intervals (middle right). Bayesian Physics Informed Neural Networks approximate the training data well, but fail to predict the behavior outside the training window (bottom right). Thin yellow lines indicate daily new case data, thick orange lines are their seven-day average, dashed red lines are the model, light blue areas are its credible intervals, and blue dots are the training data.
By minimizing a loss function that characterizes the error between model and data and the error in satisfying the physics, they simultaneously learn the network parameters θ = {Wₖ, bₖ} and the physics parameters ϑ = {c, k, x₀}. For our example, the network has 1088 weights and 65 biases, and the physics model has a damping, stiffness, and offset parameter, resulting in a total of 1156 unknowns. Figure 14, middle left, suggests that the Physics Informed Neural Network approximates both the training data and the behavior outside the training window reasonably well and is equally capable of both interpolation and extrapolation. From Figure 13 we conclude that Physics Informed Neural Networks require a relatively large set of training data. For our example, they perform moderately initially, but converge well with increasing training set size. However, from Figures 6 and 7, we conclude that Physics Informed Neural Networks are sensitive to the weighting coefficient ε, which can bias the solution toward either data or physics. From Figure 15, we add that they are also sensitive to a scaling of the physics parameters ϑ, which, if done appropriately, can improve the performance of the model. An inherent limitation of plain Physics Informed Neural Networks is that they are not equipped with built-in uncertainty quantification, which may restrict their applications, especially in situations with noisy data.
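The composite loss described above can be sketched compactly. As before, the physics residual is approximated here by finite differences on a uniform grid for illustration; an actual Physics Informed Neural Network differentiates the network output exactly:

```python
import numpy as np

def pinn_loss(x, t, x_hat, c, k, eps=1.0):
    """Composite PINN loss: data mean squared error plus eps-weighted physics error.

    x: model output on the uniform time grid t; x_hat: training data.
    The residual of x'' + c x' + k x = 0 is approximated by finite
    differences for illustration only.
    """
    dt = t[1] - t[0]
    x_t = np.gradient(x, dt)
    x_tt = np.gradient(x_t, dt)
    loss_data = np.mean((x_hat - x)**2)              # data term
    loss_phys = np.mean((x_tt + c * x_t + k * x)**2) # physics term
    return loss_data + eps * loss_phys
```

The sensitivity to ε is plain from the last line: a large ε lets the physics term dominate and can pull the model away from the data, while a small ε reduces the method to a classical Neural Network.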
Self Adaptive Physics Informed Neural Networks provide adaptive and robust fits of data and physics. Self Adaptive Physics Informed Neural Networks adaptively fit a network model to data and, at the same time, to a physics-based model. They inherit all the advantages of Physics Informed Neural Networks and address the limitation of bias between data and physics by introducing the weighting coefficient as an independent, time-varying unknown. This allows them to perform well, even in regions with steep gradients, while using a smaller number of training epochs. Self Adaptive Physics Informed Neural Networks minimize a loss function that characterizes the error between model and data and between model and physics, and learn the network parameters θ = {Wₖ, bₖ}, the physics parameters ϑ = {c, k, x₀}, and the weighting term ε(t) between network and physics. For our example, the network model has 1088 weights and 65 biases, the physics model has a damping, stiffness, and offset, and the self adaptive model has a weighting coefficient, resulting in a total of 1157 unknowns. Figure 14, bottom left, confirms that the Self Adaptive Physics Informed Neural Network approximates both the training data and the behavior outside the training window, similar to the regular Physics Informed Neural Network. With only one additional time-varying parameter, the self-adaptively learned weighting term, it performs well even for small training data sets, as Figure 13 confirms. One caveat is that Self Adaptive Physics Informed Neural Networks involve training with complex loss functions that consist of multiple terms and result in a highly non-convex optimization problem. During training, these terms compete with one another, which implies that the training process may not be robust and stable and does not always converge to a global minimum.
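The self-adaptive idea can be sketched as one update of the time-varying weights: gradient descent on the network and physics parameters alternates with gradient ascent on ε(t). The update rule below is a simplified illustration under that assumption, not the paper's exact scheme:

```python
import numpy as np

def sa_pinn_step(loss_data_i, res_sq_i, eps, lr_eps=1e-2):
    """One self-adaptive update of the time-varying weighting eps(t_i).

    res_sq_i are the point-wise squared physics residuals r_i**2. Gradient
    ascent on the eps-weighted loss raises eps where the residual is large,
    forcing the optimizer to work hardest where the physics fit is worst.
    """
    eps = eps + lr_eps * res_sq_i        # d/d_eps of sum_i eps_i r_i^2 is r_i^2
    total = np.mean(loss_data_i) + np.mean(eps * res_sq_i)
    return eps, total
```

Because the weights grow precisely where the residual is largest, steep-gradient regions are emphasized automatically, which is the behavior described above.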
Self Adaptive Physics Informed Neural Networks perform best when the equations are well scaled and pre-conditioned and their parameters all lie within the same range of magnitude. While we have not explicitly shown the effect of different pre-conditioning and scaling techniques here, we return to the role of scaling for the Bayesian Physics Informed Neural Networks and Figure 15 below.
Bayesian Inference fits a physics-based model to data and provides credible intervals. Classical Bayesian Inference is a simple, robust, and stable method that fits a physics-based model to data. This implies that we know the underlying physics. In our example, we do not know the exact physics; however, we hypothesized that the data, the daily new cases of COVID-19 worldwide, follow the dynamics of a damped harmonic oscillator. Bayesian Inference then uses Bayes' theorem to infer the distribution of a set of model parameters, ϑ = {c, k, x₀}, that best explain the data. For our example, we have assumed log-normal distributions for each parameter, with individual means µ and standard deviations σ, introducing a total of 3 × 2 = 6 unknowns. Figure 14, top right, demonstrates that, provided the physics-based model is reasonable, the Bayesian Inference approximates the behavior well throughout the entire time window. Instead of only inferring a point value for each parameter, Bayesian Inference infers parameter distributions and credible intervals that provide valuable insight into the goodness of the fit, as we have seen in the sensitivity analysis for varying training data set sizes in Figure 10: the narrower the credible interval, the better the fit.
Bayesian Neural Networks fit a network model to data and provide credible intervals. Bayesian Neural Networks fit a network model to data without any underlying physics. They use Bayes' theorem to infer the distribution of a set of network parameters, θ = {Wₖ, bₖ}, that best explain the data. For our example, a feed-forward network with one input, two hidden layers with 32 nodes each, and one output, the network has 1088 weights and 65 biases. We have assumed normal distributions for each parameter, with individual means µ and standard deviations σ, introducing a total of 1153 × 2 = 2306 unknowns. Figure 14, middle right, shows that the Bayesian Neural Network shares the features of classical Neural Networks and Bayesian Inference. It approximates the training data well, but fails to predict the behavior outside the training window. The wide credible intervals highlight the poor predictive potential of the model and confirm that any prediction a few days beyond the training window generates unreliable results.
Bayesian Physics Informed Neural Networks provide a good fit and prediction, but are sensitive to scaling. Bayesian Physics Informed Neural Networks fit both network and physics and, in addition, provide credible intervals that offer insight into the quality of the fit. They use Bayes' theorem to learn the distributions of both the network parameters, θ = {Wₖ, bₖ}, and the physics parameters, ϑ = {c, k, x₀}, with means µ and standard deviations σ each, resulting in a total of 2312 unknowns. Figure 14, bottom right, suggests that Bayesian Physics Informed Neural Networks provide a good fit of the training data and have a reasonable predictive potential outside the training window. The narrower credible intervals compared to the Bayesian Neural Network indicate that including physics increases the performance, especially in the prediction window. Bayesian Physics Informed Neural Networks are a powerful tool when used appropriately. They combine the advantages of all methods and perform best, albeit at a larger computational cost. Since they have the largest set of parameters, their true performance is sensitive to an appropriate scaling and pre-conditioning and to the size of the training data set, as we can conclude from their convergence curve in Figure 13. However, when scaled appropriately, in our example such that the parameter k associated with the unknown x(t) becomes unity, their performance increases drastically, as we conclude from the narrow credible intervals of the scaling study in Figure 15.
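The scaling that makes the stiffness unity can be made explicit. A sketch, assuming the intended transformation is the standard nondimensionalization of time, τ = √k t, with m ≡ 1 and x' = dx/dτ:

```latex
% rescale time, tau = sqrt(k) t, so that d/dt = sqrt(k) d/dtau
\ddot{x} + c\,\dot{x} + k\,x = 0
\quad\Longrightarrow\quad
k\,x'' + \sqrt{k}\,c\,x' + k\,x = 0
\quad\Longrightarrow\quad
x'' + \tfrac{c}{\sqrt{k}}\,x' + x = 0 ,
```

so that the scaled equation has unit stiffness and the only remaining physics parameter, c/√k, is of order one, keeping all unknowns within the same range of magnitude.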
Taken together, our study has shown that by embedding physical principles into a neural network architecture, we can generate effective models that train well, even with a small amount of data. Physics-Informed Neural Networks seamlessly integrate data and physics, robustly solve forward and inverse problems, and perform well for both interpolation and extrapolation. At only minor additional cost, they can self-adaptively learn the weighting between data and physics and smoothly integrate real-world data and physics-based modeling. Combined with Bayesian Neural Networks, Physics-Informed Neural Networks can serve as a prior in a general Bayesian framework, and can generate estimators for posterior distributions that provide valuable insight into uncertainty quantification. Here we have only demonstrated these features for the simple model problem of a seasonal endemic infectious disease, but it is easy to see how the underlying concepts and trends would generalize to more complex disease conditions and, more broadly, to a wide variety of nonlinear dynamical systems.