Learning Parameterized ODEs From Data

In contemporary research, neural networks are being used to derive Ordinary Differential Equations (ODEs) from observations. However, parameterized ODEs pose a greater challenge than non-parameterized ODEs, since the networks must understand the roles of the parameters, i.e., the structure of the equations. This paper proposes a novel approach that combines a Symbolic Neural Network (S-Net) with an ODE solver to address this issue. First, S-Net learns the structure of the parameterized ODEs; it then predicts the dynamics for new parameters and new initial states. To assess its performance, we compare our approach with a widely used Ordinary Neural Network (O-Net) that directly learns and predicts ODEs. Our numerical experiments demonstrate that our approach outperforms O-Net when applied to the Lotka-Volterra and Lorenz equations.


I. INTRODUCTION
The combination of deep learning and differential equations is a highly promising research direction. Deep learning can help uncover the underlying differential equations governing the behavior of many systems, such as electromagnetism, aerodynamics, weather prediction, and geophysics. Differential equation-guided network design can significantly improve model interpretability and generalization performance. The potential impact of this approach on scientific discovery and innovation is immense.
Chen et al. [1] introduced Neural Ordinary Differential Equations (NODEs), a new family of deep networks. They used a neural network to parameterize the derivative of the hidden state, serialized the neural network layers and parameters, and used the adjoint ODE method to optimize the neural network instead of back-propagation, which saved memory. Their work inspired research on variants of NODE methods, such as [2] and [3]. Other studies have also explored the potential of neural networks to learn and solve differential equations. For example, Chen et al. [4] proposed the Symplectic Recurrent Neural Network (SRNN) to capture the dynamics of physical systems from regularly observed data. Raissi et al. [5] introduced physics-informed neural networks (PINNs) to solve data-driven solution and data-driven discovery problems of partial differential equations (PDEs). Long et al. [6], [7] proposed PDE-Net, a feed-forward deep network that predicts the dynamics of complex systems and uncovers the underlying hidden PDE models. PDE-Net implemented a Symbolic Neural Network (S-Net) to learn the structure of PDEs and demonstrated powerful learning ability. Similarly, [8], [9] also proved the competitive generalization capability of S-Net. These works show the potential of deep learning in solving PDEs and discovering hidden models.
Parameterized ODEs are a class of ODEs with solutions that vary with parameters, representing multiple dynamics specified by input parameter instances. They have been extensively studied in computational science and engineering domains, such as fluid dynamics and the ideal pendulum system. In critical situations, computing high-fidelity solutions of parameterized ODEs is necessary, either for numerous input parameter instances or initial states. Data-driven methods have been used in recent years to estimate the evolution of dynamical systems over time, including different initial conditions [10] and system parameters [11]. To learn the latent dynamics of complex dynamical processes in computational physics, Lee and Parish [11] proposed encoder-decoder parameterized NODEs (PNODEs). Shimizu and Parish [12] presented the windowed space-time least-squares Petrov-Galerkin method (WST-LSPG) for model reduction of nonlinear parameterized dynamical systems. WST-LSPG divides the time simulation into several windows and sequentially minimizes the discrete-in-time residual within its own unique low-dimensional space-time subspace. Lee and Trask [13] introduced POUNODEs, a new variant of NODEs with evolving model parameters. They modeled the evolution using partition-of-unity networks, allowing for greater flexibility in capturing the dynamics of complex systems.
To the best of our knowledge, the majority of existing approaches for solving parameterized ODEs rely on black-box methods, which employ neural networks directly to simulate the system's behavior and make predictions. Our proposed approach, on the other hand, aims to gain a deeper understanding of the underlying mechanics of the dynamical system by first learning the structure of the ODEs. This understanding enables us to predict the behavior of the dynamical system more accurately and efficiently for new parameters and initial states than black-box methods. The contributions of this work are summarized as follows:
• Our proposed framework combines S-Net with an ODE solver to learn parameterized ODEs. This novel approach enables accurate and efficient modeling of the dynamics of complex systems, even when the properties of the governing equations vary across multiple input parameters.
• To evaluate our approach, we compare its performance with that of a baseline model, O-Net, which represents a black-box approach. We empirically demonstrate the advantages of our proposed framework in terms of both accuracy and interpretability.
The remainder of this paper is organized as follows: Section II presents related work. The considered parameterized ODEs problem and the ODE solver are briefly introduced in Section III. Section IV illustrates the proposed approach. We evaluate the performance of our method on two case studies: the Lotka-Volterra Equation in Section V and the Lorenz Equation in Section VI. Finally, in Section VII, we summarize our findings and discuss potential avenues for future research.

II. RELATED WORK
Several studies have explored the use of Gaussian process regression to develop tailored functional representations for a given linear operator [14], [15], [16]. However, the local linearization of nonlinear terms in time and prior assumptions of Gaussian process regression limit the representation capacity of the model. Sparse regression, discussed in [17], [18], [19], and [20], overcomes this limitation by developing a dictionary of basic functions and partial derivatives that can accurately represent the data using sparsity-promoting techniques. However, the predictive and expressive capabilities of the dictionary are restricted since the sparse regression method necessitates predefining specific numerical approximations for spatial differentiation.
Mesh-based simulations have recently shown significant progress [21], [22], surpassing grid-based convolutional neural networks (CNNs) in terms of runtime and exhibiting greater adaptivity to the simulation domain. While several methods, such as AntisymmetricRNN [23] and the continuous-time Gated Recurrent Unit with a Bayesian update network [24], have leveraged the stability of underlying differential equations to capture long-term dependencies, they did not address the challenge of learning the expression of the equation to gain a deeper understanding of the underlying mechanism behind the observed data. In another work [25], the authors learned the unknown parameters of the ODE system by constructing certain time-related features, but this method did not address the expressiveness of the equation.

III. PRELIMINARIES
In this section, we define parameterized ODEs and differentiate them from non-parameterized ODEs. We also introduce the ODE solver, a crucial component in our approach.
Concerning the parameterized ODEs problem, the observation data are generated by different parameters (α_k, β_k, γ_k) at specified times t_i from different initial states (s_0^j, p_0^j, r_0^j). Here, we define K > 0 as the number of parameter sets, N_obs > 0 as the number of observation time points, and M > 0 as the number of initial states.
Parameterized ODEs find applications in various scenarios. For instance, in [26], different parameters associated with the ODEs represent distinct strategies for regulating cancer tumor progression. Given patient data describing the evolution of a tumor, it would be advantageous to obtain the functional form of the ODE system underlying this complex system, for both the state variables and the parameters. Once we have constructed the functional form of the ODE system using a neural network, we can directly compute solutions of the ODE for new initial states and parameter sets. Traffic flow problems are another common application. Changes in traffic flow over time at two specific locations x_1, x_2 in a city can be modeled as

ds/dt = f(t, s, p, w),  dp/dt = g(t, s, p, w).

In this context, the numbers of cars at positions x_1 and x_2 are represented by s and p, respectively, and the traffic flow changes over time at these locations are modeled by the functions f and g. Here, t ∈ (0, 24] denotes one day per cycle, and the vector w represents external factors that influence traffic conditions, such as weather and temperature. We assume that the external factors w change daily, but the expressions for f and g remain constant, as the mechanism by which each factor affects traffic flow stays the same. Once we obtain the correct analytical formulas, we can accurately predict the system. This approach is also effective for discrete chaotic maps; take the Logistic map as an example. The Logistic map is a simple, one-dimensional discrete-time dynamical system characterized by the equation

x_{n+1} = γ x_n (1 − x_n).   (5)

Here, x_n is the state variable at time step n, and γ is a parameter. We can employ the proposed method to learn the expression on the right side of (5), which means the input of the S-Net consists of both γ and x.
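As an illustration, logistic-map observations of the kind described above could be generated as follows. This is a minimal sketch under our reading of (5); the function name and the sample values of γ and x_0 are ours:

```python
import numpy as np

def logistic_map(x0, gamma, n_steps):
    """Iterate the logistic map x_{n+1} = gamma * x_n * (1 - x_n)."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(gamma * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

# One trajectory per (gamma, x0) pair; the network input at step n is (gamma, x_n).
traj = logistic_map(x0=0.4, gamma=3.2, n_steps=50)
```

Each pair (γ, x_n) → x_{n+1} then serves as one training sample for the network.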
In this paper, we begin with the Euler integrator [29], a popular choice due to its simplicity, as demonstrated in previous works [1], [2], [13]. The Euler method also facilitates a fair comparison with black-box methods. Given the state z_n at time point t_n = t_0 + nΔt, the state at the next time point can be computed as

z_{n+1} = z_n + Δt f(t_n, z_n),

where f denotes the right-hand side of the ODE system. The time step size, denoted by Δt, is a crucial parameter in numerical methods for solving ODEs. In particular, the Euler method can produce unstable solutions for stiff ODE systems unless a very small Δt is employed, as noted in [30].
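A minimal sketch of this Euler rollout might look as follows; the function names are ours, and F_theta stands in for the (learned or known) right-hand side:

```python
import numpy as np

def euler_rollout(F_theta, z0, params, dt, n_steps):
    """Roll out z_{n+1} = z_n + dt * F_theta(z_n, params) from the initial state z0."""
    traj = [np.asarray(z0, dtype=float)]
    for _ in range(n_steps):
        traj.append(traj[-1] + dt * F_theta(traj[-1], params))
    return np.stack(traj)

# Example with a known right-hand side (exponential decay dz/dt = -a*z):
rollout = euler_rollout(lambda z, p: -p["a"] * z, z0=[1.0], params={"a": 1.0},
                        dt=0.1, n_steps=10)
```

In the learning setting, F_theta would be a neural network and the rollout would be differentiated through during training.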
To maintain solution stability in our experiments, we carefully select an appropriate value of Δt. In Appendix A, we present a proof of the convergence of the Euler method, which is essential for understanding the accuracy of the numerical solutions. Our approach can also be adapted to learn unknown functions using semi-implicit solvers with appropriate modifications; Appendix B briefly explains the necessary adjustments.

IV. OUR APPROACH
A. PIPELINE
We use initial states Z_0 = {z_0^j | j = 1, …, M} and parameters W = {(α_k, β_k, γ_k) | k = 1, …, K} for the training process. Following [1], we let the right side of the ODEs be a parametric function F_θ(z, α, β, γ), where z = (s, p, r) and θ is the vector of parameters of the neural network. After training, the model predicts the states at the observation times t_i, where i = 1, …, N_obs. Given a series of observations Z = {z(t_i; z_0^j, α_k, β_k, γ_k) | i = 1, …, N_obs, j = 1, …, M, k = 1, …, K}, we estimate the parameter θ by minimizing the error between the observed trajectories Z and the predicted trajectories Ẑ, denoted L_data. Inspired by [7], we also add a regularization term L_S-Net to the loss function to avoid overfitting and enhance the generalization capability of the model. L_S-Net is defined in (9) as the sum of l_s over the weights of the S-Net, where l_s(x) = |x| if |x| ≥ s and l_s(x) = x² otherwise, with s = 0.001.
L_data and L_S-Net constitute the loss function L, that is, L = L_data + L_S-Net. Figure 1 provides a detailed illustration of how to generate a trajectory based on the initial state z_0 = (s_0, p_0, r_0) ∈ Z_0 and parameters (α, β, γ) ∈ W. Herein, we denote s_i = s(t_i; z_0, α, β, γ), p_i = p(t_i; z_0, α, β, γ), and r_i = r(t_i; z_0, α, β, γ) to represent the states of s, p, and r at time t_i. Algorithm 1 illustrates the pipeline for solving the parameterized ODE problem. We back-propagate the loss and use it to update the parameter θ, which allows us to obtain the best θ*. This value represents the ODE system and enables us to predict the dynamics for new initial states and parameters. After four shared layers, the network splits into three parts for ds/dt, dp/dt, and dr/dt, respectively. Each part has two layers: the first layer has 512 units with the tanh activation function, and the second layer outputs the corresponding derivative.
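The loss computation can be sketched as follows. This assumes our reading of the penalty definition (|x| above the threshold s, x² below it) and an unweighted sum L = L_data + L_S-Net; the function names and the `lam` weight are ours:

```python
import numpy as np

def smoothed_l1(w, s=1e-3):
    """Sparsity penalty l_s: |w| for |w| >= s, quadratic (w^2) below the threshold."""
    w = np.abs(w)
    return np.where(w >= s, w, w ** 2).sum()

def total_loss(Z_obs, Z_pred, weights, lam=1.0):
    """L = L_data + lam * L_S-Net, with L_data the squared trajectory error."""
    l_data = np.sum((np.asarray(Z_obs) - np.asarray(Z_pred)) ** 2)
    l_reg = sum(smoothed_l1(w) for w in weights)
    return l_data + lam * l_reg

# Tiny example: 2x2 observed/predicted states and one S-Net weight vector.
loss = total_loss(np.ones((2, 2)), np.zeros((2, 2)), [np.array([2.0, 0.0005])])
```

In practice the gradients of this loss with respect to θ would be obtained by automatic differentiation through the Euler rollout.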

B. ARTIFICIAL NEURAL NETWORKS: O-NET AND S-NET
In Algorithm 1, we can use any form of neural network. We adopt O-Net as our baseline, since it is commonly used for regression tasks without exploring the underlying mechanism connecting inputs and outputs. To learn the combinations of the different variables and parameters, however, we design S-Net, inspired by [6], [7], [8], and [9]. Unlike O-Net, S-Net first learns the analytic expression of the ODEs and then predicts the dynamics. Notably, S-Net has no activation functions, and the most significant difference between O-Net and S-Net is the mapping between units in the layers.

1) O-NET
O-Net uses fully connected layers to learn the mapping from a low- to a high-dimensional space and uses activation functions to learn nonlinear relationships. Consider an O-Net with four shared layers and two unique layers for each of ds/dt, dp/dt, and dr/dt, as illustrated in Figure 2. To better understand O-Net, which is usually used for regression, we present a mathematical description in Algorithm 2 showing how O-Net is constructed. O-Net is ineffective at solving variable-parameter ODE problems because it fails to capture the role of the parameters. Although both parameters and variables are treated as input features, they behave differently. Specifically, at each time point the input vector is (z, α, β, γ), where z changes over time while (α, β, γ) do not. As a result, it is challenging for α, β, and γ to find appropriate weights in a fully connected neural network, making it difficult for O-Net to learn the underlying dynamics of the system.
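A plain-NumPy sketch of the O-Net forward pass described above might look like this. The width of the shared layers (512) is our assumption, since only the head width is stated in the text, and the random initialization is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """Randomly initialized weight matrix and zero bias for one dense layer."""
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

# Four shared layers, then a two-layer head per derivative (ds/dt, dp/dt, dr/dt).
shared = [dense(6, 512)] + [dense(512, 512) for _ in range(3)]
heads = {name: (dense(512, 512), dense(512, 1)) for name in ("ds", "dp", "dr")}

def o_net(z, params):
    """Input (s, p, r, alpha, beta, gamma); output the three time derivatives."""
    h = np.concatenate([z, params])
    for W, b in shared:
        h = np.tanh(h @ W + b)
    out = {}
    for name, ((W1, b1), (W2, b2)) in heads.items():
        out[name] = (np.tanh(h @ W1 + b1) @ W2 + b2)[0]
    return out

out = o_net(np.array([1.0, 2.0, 3.0]), np.array([0.1, 0.2, 0.3]))
```

Note that the constant inputs (α, β, γ) pass through the same dense layers as the time-varying state, which is precisely the weakness discussed above.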

2) S-NET
S-Net possesses the ability to learn analytical expressions that can generalize to new domains effectively. The primary distinction between O-Net and S-Net lies in their layer-unit mapping, where S-Net is designed to capture the interaction between various variables and parameters efficiently. This is accomplished through two types of transformations: identity and linear combination maps, which enable S-Net to learn the underlying dynamics of the system accurately.
As an example, consider a two-layer S-Net that aims to learn the function f ∈ F in (1), as illustrated in Figure 3. With an appropriate number of layers, S-Net can represent all polynomials of the variables (s, p, r, α, β, γ). Herein, we use only the addition and multiplication operators in S-Net; if necessary, more operations can be added to increase the capacity of the network. To better understand S-Net, we present an example in Algorithm 3 showing how S-Net is constructed. In particular, we illustrate the learning process of αs − βr using S-Net in (11), (12), (13), (14), and (15).
In the first hidden layer, two linear combinations of the inputs are formed, (δ_1, ε_1)^T = w_1 × (s, p, r, α, β, γ)^T + b_1, and their product f_1 = δ_1 ε_1 (here, αs) is appended to the features. The second hidden layer proceeds analogously,

(δ_2, ε_2)^T = w_2 × (s, p, r, α, β, γ, αs)^T + b_2,
f = w_3 × (s, p, r, α, β, γ, αs, βr)^T + b_3.

The two combination operators in Figure 3 correspond to the multiplication and division of two elements, respectively; we implement only the multiplication operation in this work. Apart from s, p, r, α, β, and γ, obtained by the identity map, f_1 is also input to the second hidden layer. Similar to the first hidden layer, we obtain the further combination f_2 = δ_2 ε_2 through w_2 and b_2. Finally, we obtain the analytic expression of the function f. It is necessary to enforce the sparsity of S-Net, since sparsity helps reduce overfitting and enables more robust predictions.
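The layer mechanics of this example can be sketched as follows. The weight values are chosen by hand to reproduce αs − βr rather than learned, and the bias terms are omitted for brevity:

```python
import numpy as np

def snet_layer(features, w_delta, w_eps):
    """One S-Net layer: identity-map the inputs and append the product of two
    learned linear combinations (delta = w_delta @ features, eps = w_eps @ features)."""
    delta = w_delta @ features
    eps = w_eps @ features
    return np.append(features, delta * eps)

# Recovering alpha*s - beta*r from (s, p, r, alpha, beta, gamma) in two layers:
s, p, r, alpha, beta, gamma = 2.0, 1.0, 3.0, 0.5, 0.25, 0.1
x = np.array([s, p, r, alpha, beta, gamma])
x = snet_layer(x, np.array([1., 0, 0, 0, 0, 0]),
               np.array([0., 0, 0, 1., 0, 0]))      # appends alpha*s
x = snet_layer(x, np.array([0., 0, 1., 0, 0, 0, 0]),
               np.array([0., 0, 0, 0, 1., 0, 0]))   # appends beta*r
f = x[-2] - x[-1]  # alpha*s - beta*r via a final linear read-out
```

A learned, sparse w_3 in the final read-out would select exactly these two product features with coefficients +1 and −1.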
Our analysis of αs − βr reveals that the sparsity of S-Net is a critical factor. Therefore, we introduced a regularization term into the loss function to promote model sparsity, as discussed in section IV-A.

V. NUMERICAL STUDIES: LOTKA-VOLTERRA EQUATION
The Lotka-Volterra model is frequently used to describe the dynamics of ecological systems in which two species interact. [31] shows that cannibalism has both positive and negative effects on the stability of the Lotka-Volterra predator-prey model, depending on the dynamic behaviors of the original system.
In this section, we consider the Lotka-Volterra Equation (16), where different ecological systems have different parameters α, β, δ, and γ and initial states x_0 and y_0:

dx/dt = αx − βxy,  dy/dt = −δy + γxy.   (16)

The Euler integrator simulates the ground-truth trajectories in both the training and testing stages with time step Δt = 0.1, which empirically meets the stability requirements. The training and testing data consist of 150 and 33 trajectories, respectively, each starting from a random initial state (x_0, y_0) in the interval [0.6, 1.4] and with parameters (α, β, δ, γ) drawn from the intervals [1.0, 2.0], [0.5, 1.5], [2.5, 3.5], and [0.5, 1.5], respectively. There is no overlap between the training and test data.
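Under our reading of (16), the ground-truth trajectories could be generated like this; the function name and sample parameter values are ours:

```python
import numpy as np

def lotka_volterra_euler(x0, y0, alpha, beta, delta, gamma, dt=0.1, n_steps=100):
    """Simulate dx/dt = alpha*x - beta*x*y, dy/dt = -delta*y + gamma*x*y
    with the explicit Euler scheme (our reading of Eq. (16))."""
    xs, ys = [x0], [y0]
    for _ in range(n_steps):
        x, y = xs[-1], ys[-1]
        xs.append(x + dt * (alpha * x - beta * x * y))
        ys.append(y + dt * (-delta * y + gamma * x * y))
    return np.array(xs), np.array(ys)

xs, ys = lotka_volterra_euler(1.0, 1.0, alpha=1.5, beta=1.0, delta=3.0, gamma=1.0)
```

Sampling (x_0, y_0) and (α, β, δ, γ) from the stated intervals and truncating to N_obs points yields the training and test sets.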

Algorithm 3 S-Net
Input: s, p, r, α, β, γ ∈ ℝ
Output: F

The performance of O-Net and S-Net can be improved with a reasonable increase in the number of time points N_obs in the training data. We conduct four groups of experiments on training data with N_obs = 10, 15, 20, and 25. However, if the number of observation points is too large, it hurts the effectiveness of the model: with too many observations, the accumulated error along one trajectory becomes too large, which is not conducive to model training. We train S-Net with the L-BFGS optimizer [32] using a maximum of 30,000 iterations and a batch size of 150. We train O-Net with the ADAM optimizer [33] for 5,000 epochs using a batch size of 50 and a learning rate of 5e-3. We predict trajectories based on various initial states and parameters on t ∈ (0, 10].

A. RESULTS AND DISCUSSIONS
Our study shows that S-Net effectively recovers the ODEs for the unknown variables x and y as well as the parameters α, β, δ, and γ. We use the notation M_{N_obs} to represent the model trained on data with N_obs time points. The results, summarized in Table 1, demonstrate that we accurately recover the terms of (16). We observe that increasing the number of time points within a specific range results in higher model accuracy. Specifically, for M_25 the coefficients of αx, βxy, δy, and γxy match those of the true equation. Additionally, the terms not included in (16) have relatively small coefficients.
To evaluate the ability of the models to generate correct trajectories for new initial states and parameters, we feed the 33 testing trajectories into the well-trained models and obtain the predicted trajectories. Figure 4 shows the results of S-Net and O-Net with N_obs = 10, 15, 20, 25 for a specific test example (x_0, y_0, α, β, δ, γ) = (0.71468, 1.01860, 1.10023, 1.17939, 2.78173, 1.18328). We find that S-Net accurately predicts the trajectory over the period t ∈ (0, 80Δt] = (0, 8], but its accuracy decreases for t ∈ (8, 10]. However, increasing the number of time points used in training improves S-Net's prediction of long-time dynamics: S-Net trained as M_25 predicts t ∈ (8, 10] more accurately than S-Net trained as M_10. O-Net, on the other hand, performs poorly in all four cases; even though it performs well in the early stages for a short period, its predictions often deviate significantly from the actual trajectory, and in some cases O-Net even predicts trends opposite to the actual situation. We therefore conclude that S-Net has a stronger generalization ability than O-Net. We define the error between observations Z and predictions Ẑ as ϵ = ||Z − Ẑ||_2. The error plots are shown in Figure 5 for two different numbers of time points, 20 and 25. In both cases, the error of O-Net is significantly larger than that of S-Net. For example, in Figure 5(c), the error of O-Net sharply increases over time and soon reaches 500, whereas the error of S-Net grows at a relatively slower rate and reaches a maximum of approximately 100. Table 2 shows the errors of S-Net and O-Net for the different models; the error of O-Net can be five or six times that of S-Net when M_{N_obs} is 10 or 25, respectively. Clearly, S-Net performs significantly better than O-Net, suggesting that proper discretization is crucial when the ODE structure is unknown.
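The error metric ϵ = ||Z − Ẑ||_2 can be computed directly; the function name is ours:

```python
import numpy as np

def trajectory_error(Z_obs, Z_pred):
    """epsilon = ||Z - Z_hat||_2 over an entire trajectory (Frobenius norm
    of the difference between observed and predicted state arrays)."""
    return np.linalg.norm(np.asarray(Z_obs, dtype=float)
                          - np.asarray(Z_pred, dtype=float))

# Tiny example: two time points, two state variables, one mismatched entry.
eps = trajectory_error([[1.0, 2.0], [3.0, 4.0]], [[1.0, 2.0], [3.0, 3.0]])
```

Evaluating this per test trajectory and per model produces the curves in Figure 5 and the aggregates in Table 2.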
Appendix C presents the findings for three other cases using the different models obtained with N_obs = 10, 15, 20, and 25.

VI. NUMERICAL STUDIES: LORENZ EQUATION
In this section, we consider the Lorenz Equations (17), where different systems have different parameters σ, ρ, and β and initial states x_0, y_0, and z_0:

dx/dt = σ(y − x),  dy/dt = x(ρ − z) − y,  dz/dt = xy − βz.   (17)
The training and testing data consist of 150 and 60 trajectories, respectively, simulated by the Euler integrator with a time step of Δt = 0.01. The initial states and parameters are randomly selected from the intervals [0.8, 1.2], [0, 0.2], [0, 0.3], [5, 15], [23, 33], and [2, 3], respectively, without any overlap between the two sets. We conduct four groups of experiments on the training data with different numbers of time points: 4, 6, 10, and 20. For training S-Net, we use the L-BFGS optimizer with a maximum of 50,000 iterations and a batch size of 150. For training O-Net, we use the ADAM optimizer with a learning rate of 5e-3, a batch size of 100, and 5,000 epochs. We predict trajectories for various initial states and parameters on the time interval t ∈ (0, 1]. Table 3 presents the capability of the trained S-Net to identify the underlying ODE model, showing the top five terms by coefficient weight recovered by S-Net with certain accuracy. Notably, for M_20 the coefficients of σy, σx, and xρ match those of the true equation with high precision. The coefficients of the remaining four terms, namely xz, y, xy, and βz, deviate only slightly from the true values, with a maximum difference of 0.01. These results demonstrate the effectiveness of S-Net in recovering the underlying ODE model.
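Under the standard Lorenz form assumed above for (17), the trajectories could be generated as follows; the function name and sample values are ours:

```python
import numpy as np

def lorenz_euler(state0, sigma, rho, beta, dt=0.01, n_steps=100):
    """Euler simulation of dx/dt = sigma*(y - x), dy/dt = x*(rho - z) - y,
    dz/dt = x*y - beta*z (the standard Lorenz system, our reading of Eq. (17))."""
    traj = [np.asarray(state0, dtype=float)]
    for _ in range(n_steps):
        x, y, z = traj[-1]
        traj.append(traj[-1] + dt * np.array([sigma * (y - x),
                                              x * (rho - z) - y,
                                              x * y - beta * z]))
    return np.stack(traj)

traj = lorenz_euler([1.0, 0.1, 0.2], sigma=10.0, rho=28.0, beta=8.0 / 3.0)
```

Restricting each trajectory to the first N_obs points produces the four training configurations described above.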

A. RESULTS AND DISCUSSIONS
We evaluate the predictive performance of S-Net and O-Net on (17) using 60 testing trajectories and obtain the corresponding predictions from the trained models. Figure 6 shows the results of S-Net and O-Net with N_obs = 4, 6, 10, 20 for a specific test example (x_0, y_0, z_0, σ, ρ, β) = (0.82841, 0.14785, 0.17099, 12.39551, 32.03720, 2.67205). We predict the trajectories of x, y, and z on t ∈ (0, 100Δt] = (0, 1] using the learned models. When N_obs = 4, both the O-Net and S-Net predictions are very poor. As the number of observation points increases, the predictions of S-Net improve and outperform those of O-Net significantly. We provide the findings for two other cases using the different models obtained with N_obs = 4, 6, 10, and 20 in Appendix D.

VII. CONCLUSION
Exploring parameterized ODEs is a broad area in computational science and engineering. Parameterized ODE problems are particularly challenging because their solutions vary with the parameters used. Black-box methods offer some insights, but their generalization performance is often subpar due to a limited understanding of the underlying mechanisms. To address this issue, we propose a novel framework that combines S-Net with an ODE solver to learn parameterized ODEs. Unlike the black-box O-Net method, which simply fits the observed data, S-Net learns the expressions of the equations from the observed data. We use the Euler method as the ODE solver due to its simplicity. Experiments show that the S-Net framework surpasses the O-Net method in learning parameterized ODEs, evidenced by superior efficiency and accuracy in tests involving the Lotka-Volterra and Lorenz Equations.
The proposed framework marks a significant advancement in studying parameterized ODEs in computational science and engineering.
However, there are several limitations to address in future research. First, the current version of S-Net supports only basic operators such as addition, subtraction, and multiplication, which limits its application to complex systems. To broaden its applicability, it is necessary to incorporate more advanced operators such as division, trigonometric functions, powers, and fractional calculus operators. Second, enhancing simulation accuracy requires implementing other ODE solvers. Third, the present approach is not applicable to systems of arbitrary order, which present a more complex challenge: the combinatorial relationship between variables and parameters must be known, and the observations are supplied based on information about the variables. An additional step may be required to predict the number of variables or parameters based on the current method. Finally, while S-Net has proven resilient against noisy data in [9], testing with real data is necessary to establish its applicability to diverse domains.

APPENDIX A CONVERGENCE OF THE EULER METHOD
Our subsequent analysis establishes the convergence of the Euler method when solving the initial-value problem of a first-order differential equation, as stated in (18):

dy/dx = f(x, y),  y(x_0) = y_0.   (18)

Here, the unknown function is denoted by y(x), the known function by f(x, y), and the initial data by y_0.
Theorem 1: Assume that the one-step method corresponding to the initial-value problem (18), y_{n+1} = y_n + hφ(x_n, y_n, h), has p-th order accuracy, that the function φ satisfies a Lipschitz condition in y, i.e., |φ(x, y_1, h) − φ(x, y_2, h)| ≤ L|y_1 − y_2| for all y_1, y_2, and that y_0 = y(x_0). Then the one-step method is convergent and y(x_n) − y_n = O(h^p).
Since the one-step method has p-th order accuracy, the local truncation error satisfies |y(x_{n+1}) − y(x_n) − hφ(x_n, y(x_n), h)| ≤ Ch^{p+1} for some constant C. Writing e_n = y(x_n) − y_n and using the Lipschitz condition, we get |e_{n+1}| ≤ (1 + hL)|e_n| + Ch^{p+1}. Applying this bound recursively with e_0 = 0 yields

|e_n| ≤ (Ch^p / L)(e^{L(x_n − x_0)} − 1).   (20)

From (20), we know that when h → 0, |e_n| → 0.

APPENDIX B THE SEMI-IMPLICIT SOLVERS
Our approach can be adapted to learn unknown functions using semi-implicit solvers with suitable modifications. For instance, consider the semi-implicit Euler method, which can be employed for a pair of differential equations of the form dx/dt = f(t, y), dy/dt = g(t, x), where f and g are unknown functions that we want to learn. The semi-implicit Euler method produces an approximate discrete solution by iterating

y_{n+1} = y_n + g(t_n, x_n)Δt,  x_{n+1} = x_n + f(t_n, y_{n+1})Δt,

where Δt is the time step. The difference from the standard Euler method is that the semi-implicit Euler method uses y_{n+1} in the equation for x_{n+1}, while the standard Euler method uses y_n. Given that the expressions for f and g are represented by S-Nets, S-Net_f and S-Net_g, it is necessary to provide them with known information. In the semi-implicit Euler method, we use a positive time step to compute x_{n+1} from the y_{n+1} generated by S-Net_g based on the starting points x_0, y_0. In this context, y is not used as an observable variable; instead, it serves as a component to generate the observable variable x. With this approach, since the expression for g must be learned through the x variable, we require additional observations of the x variable and no longer need observations of the y variable.
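A sketch of the semi-implicit Euler iteration above, with known f and g for illustration (in the learning setting these would be replaced by S-Net_f and S-Net_g):

```python
import numpy as np

def semi_implicit_euler(f, g, x0, y0, dt, n_steps):
    """Semi-implicit Euler for dx/dt = f(t, y), dy/dt = g(t, x):
    update y first, then use the fresh y_{n+1} in the x update."""
    xs, ys, t = [x0], [y0], 0.0
    for _ in range(n_steps):
        y_next = ys[-1] + dt * g(t, xs[-1])
        x_next = xs[-1] + dt * f(t, y_next)  # standard Euler would use ys[-1] here
        xs.append(x_next)
        ys.append(y_next)
        t += dt
    return np.array(xs), np.array(ys)

# Harmonic oscillator dx/dt = y, dy/dt = -x as a concrete test system:
xs, ys = semi_implicit_euler(lambda t, y: y, lambda t, x: -x,
                             x0=1.0, y0=0.0, dt=0.1, n_steps=100)
```

The single-line change in the x update is exactly the modification needed to fit an S-Net-based rollout to this solver.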

APPENDIX C LOTKA-VOLTERRA
In Figure 8, we present the outcomes of three test samples using the distinct models obtained with N_obs = 10, 15, 20, and 25, respectively. The rows indicate the prediction results of the different models for the three test samples. The first two columns in each row show the evolution of the two variables x and y for the first test sample (x_0, y_0, α, β, δ, γ) =

APPENDIX D LORENZ
In Figure 9, we present the outcomes of two examples using the distinct models obtained with N_obs = 4, 6, 10, and 20, respectively. Each row represents the prediction results of the different models on two test samples. The first three columns of each row show the progression of the three variables x, y, and z for the first test sample (x_0, y_0, z_0, σ, ρ, β) =

Petroleum Technology, University of Stavanger, Norway, with a focus on developing novel machine learning algorithms that leverage her expertise in differential equations, both ordinary and partial, and numerical calculation. Her research interests include the intersection of these fields, with a particular emphasis on developing innovative solutions to complex problems in energy and petroleum technology.
STEINAR EVJE received the M.S. and Ph.D. degrees in applied mathematics from the University of Bergen, in 1992 and 1998, respectively. He is currently a Professor in applied and computational mathematics with the Department of Energy and Petroleum Engineering, University of Stavanger. With more than two decades of experience in the field, he has developed a keen interest in the development of mathematical models that can be used to gain insight into fundamental mechanisms for various multiphase transport and reaction processes within fluid mechanics and medical engineering. In addition to his work on mathematical modeling, he has also focused on leveraging the power of data-driven modeling, including machine learning methods to solve problems related to partial differential equations. He has published numerous papers and articles that have helped advance the state of the art in both machine learning and mathematical modeling.
JIAHUI GENG (Graduate Student Member, IEEE) received the B.S. degree from the School of Automation, Southeast University, China, in 2015, and the M.S. degree from the Department of Computer Science, RWTH Aachen University, Germany, in 2018. He is currently pursuing the Ph.D. degree with the Department of Computer Science and Computer Engineering, University of Stavanger, Norway. His research interests include robustness, privacy, and security, as well as the development of blockchain systems and the application of dynamic systems in machine learning. With a passion for exploring the cutting edge of his field, he is committed to making meaningful contributions to the development of robust, secure, and privacy-preserving machine learning systems that can be used to drive innovation and progress in a wide range of industries and applications.