A Deep Collocation Method for the Bending Analysis of Kirchhoff Plate

In this paper, a deep collocation method (DCM) for thin plate bending problems is proposed. The method takes advantage of the computational graphs and backpropagation algorithms involved in deep learning. The proposed DCM is based on a feedforward deep neural network (DNN) and differs from most previous applications of deep learning to mechanical problems. First, batches of randomly distributed collocation points are generated inside the domain and along the boundaries. A loss function is then built so that the residuals of the governing partial differential equation (PDE) of the Kirchhoff plate bending problem and of the boundary conditions are minimised at those collocation points. A combination of optimizers is adopted in the backpropagation process to minimize the loss function and obtain the optimal network parameters. In Kirchhoff plate bending problems, the C1 continuity requirement poses significant difficulties for traditional mesh-based methods. The proposed DCM sidesteps this difficulty by using a deep neural network to approximate the continuous transversal deflection, and it proves suitable for the bending analysis of Kirchhoff plates of various geometries.


Introduction
Thin plates are widely employed as basic structural components in engineering [1], combining light weight, efficient load-carrying capacity, and economy with technological effectiveness. Their mechanical behaviour has long been studied by various methods such as the finite element method [2,3], the boundary element method [4,5], meshfree methods [6], isogeometric analysis [7], and the numerical manifold method [8][9][10]. The Kirchhoff bending problem is a classical fourth-order problem: its mechanical behaviour is described by a fourth-order partial differential equation, and for mesh-based numerical methods it is notoriously difficult to construct shape functions that are globally C1 continuous and piecewise C2 continuous, namely H2 regular. However, according to the universal approximation theorem, see Cybenko [11] and Hornik [12], any continuous function can be approximated arbitrarily well by a feedforward neural network, even one with a single hidden layer, which offers a new possibility for analysing Kirchhoff plate bending problems. We first give a brief introduction to deep learning.
Deep learning was first brought up as a new branch of machine learning in the realm of artificial intelligence in 2006 [13]; it uses deep neural networks to learn features of data at high levels of abstraction [14]. Deep neural networks adopt artificial neural network architectures with several hidden layers, which exponentially reduce the computational cost and the amount of training data in some applications [15]. The two major desirable traits of deep learning are its nonlinear processing in multiple hidden layers and its use in both supervised and unsupervised learning [13]. Several types of deep neural networks such as convolutional neural networks (CNN) and recurrent/recursive neural networks (RNN) [16] have been created, which further boost the application of deep learning in image processing [17], object detection [18], speech recognition [19], and many other domains including genomics [20] and even finance [21].
As a matter of fact, artificial neural networks (ANN), the main tools of deep learning, have been around since the 1940s [22] but performed poorly until recently. They only became a major part of machine learning in the last few decades thanks to strides in computing techniques and explosive growth in data collection and availability, especially the arrival of the backpropagation technique and advances in deep neural networks. Building on the function approximation capabilities of feedforward neural networks, ANN were adopted for solving partial differential equations (PDEs) [23][24][25], which yields a solution described in a closed analytical form. ANN methods are suitable for solving PDEs in that the resulting approximations are smooth and, being in analytical form, can be evaluated at arbitrary points inside or outside the problem domain. Yadav et al. elaborately introduced neural network methods for differential equations [26]. In the past, when neural networks with many hidden layers were used to solve nonlinear PDEs in order to obtain better results, training usually took a long time owing to the vanishing gradient problem. The introduction of pretraining, which sets the initial values of the connection weights and biases before backpropagation, now solves this problem efficiently. More recently, with improved theory incorporating unsupervised pre-training, stacks of auto-encoder variants, and deep belief nets, deep learning has become a central topic in research and applications.
Several researchers have also studied the application of deep learning to solving PDEs. Mills et al. deployed a deep convolutional neural network to solve the Schrödinger equation, directly learning the mapping between potential and energy [27]. E et al. applied deep learning-based numerical methods to high-dimensional parabolic PDEs and backward stochastic differential equations, which proved efficient and accurate even for 100-dimensional nonlinear PDEs [28,29]. E and Yu also proposed a Deep Ritz method for solving variational problems arising from partial differential equations [30]. Raissi et al. solve PDEs in a different way and have made a series of contributions to this field. They first applied probabilistic machine learning to solving linear and nonlinear differential equations using Gaussian processes and later introduced data-driven numerical Gaussian processes to solve time-dependent nonlinear PDEs, which circumvents the need for spatial discretization [31][32][33]. Later, Raissi et al. [34][35][36] introduced physics-informed neural networks for supervised learning of nonlinear partial differential equations, from Burgers' equation to the Navier-Stokes equations. Two distinct models were tailored for spatiotemporal datasets: continuous-time and discrete-time models. Raissi later employed a deep learning approach for discovering nonlinear PDEs from noisy observations in space and time with two deep neural networks, one representing the nonlinear dynamic PDE and one serving as a prior on the unknown solution [37]. Raissi also applied deep neural networks to solving coupled forward-backward stochastic differential equations and their corresponding high-dimensional PDEs [38]. Beck et al. [39,40] studied deep learning for solving stochastic differential equations and Kolmogorov equations, and validated the accuracy and speed of the proposed methods, especially in high dimensions.
Nabian and Meidani studied the solution of high-dimensional random partial differential equations with feed-forward fully-connected deep neural networks [41,42]. Based on physics-informed deep neural networks, Tartakovsky et al. studied the estimation of parameters and unknown physics in PDE models [43]. Qin et al. applied deep residual networks and observation data to approximate unknown governing differential equations [44]. Sirignano and Spiliopoulos [45] gave a theoretical motivation for using deep neural networks as PDE approximators, which converge as the number of hidden units tends to infinity. Based on this, a deep Galerkin method was tested on PDEs including high-dimensional ones. Berg and Nyström [46] proposed a unified deep neural network approach to approximating solutions of PDEs and then used deep learning to discover PDEs hidden in complex measurement data sets [47]. In general, deep feed-forward neural networks can serve well as solution approximators, especially for high-dimensional PDEs on complex domains.
Meanwhile, some researchers have studied deep learning surrogates of FEM, which mainly train deep neural networks on datasets obtained from FEM. In work by Liang et al. [48,49], a machine learning approach was first used to investigate the relationship between the geometric features of the aorta and the FEM-predicted ascending aortic aneurysm rupture risk, and deep learning was then used to estimate the stress distribution of the aorta, which will be beneficial to real-time patient-specific computational simulations. Lee et al. introduced the background involved in using deep learning for structural engineering [50]. Later, Wang et al. [51] applied deep learning to calculating the U* index for highly efficient load path analysis, with training data obtained from ANSYS results.
In this research, however, we do not confine the application of deep learning to FEM datasets. Rather, the deflection of the Kirchhoff plate is approximated with a physics-informed feedforward deep neural network with hyperbolic tangent activation functions, trained by minimizing a loss function built from the governing equation of the Kirchhoff bending problem and the related boundary conditions. The training data for the deep neural network are randomly distributed collocation points sampled from the physical domain of the plate. This deep collocation method is thus a truly meshfree method, with no need for background grids. In this study, the method is established and applied so as to enrich deep learning with longstanding developments in engineering mechanics.
The paper is organised as follows. First, a brief introduction to the strong form of Kirchhoff plate bending with typical boundary conditions is given. Then we introduce basic knowledge of deep learning techniques and algorithms, which will be helpful for the later application. For the numerical analysis, the deep collocation method with varying numbers of hidden layers and neurons is applied to plates with various shapes, boundary conditions, and load conditions, in order to demonstrate favourable numerical features of the proposed method such as high accuracy and robustness.

Kirchhoff plate bending
Based on Kirchhoff plate bending theory [1], the deformation of a thin plate is governed by the lateral deflection w (x, y) of the middle surface (z = 0). Under the coordinate system shown in Figure 1, the displacement field in a thin plate can be expressed as:

u = −z ∂w/∂x, v = −z ∂w/∂y, w (x, y, z) = w (x, y). (1)

It is obvious that the transversal deflection of the middle plane can be regarded as the field variable of the thin plate bending problem. The corresponding bending and twist curvatures are the generalized strains:

κ = (κ_x, κ_y, κ_xy)^T = −(∂²w/∂x², ∂²w/∂y², 2 ∂²w/∂x∂y)^T. (2)

Therefore, the geometric equation of Kirchhoff bending can be expressed as:

κ = L w, (3)

with L being the differential operator defined as L = −(∂²/∂x², ∂²/∂y², 2 ∂²/∂x∂y)^T. Accordingly, the bending and twisting moments shown in Figure 1 can be obtained from the curvatures through the bending rigidity

D = Eh³ / (12(1 − ν²)), (4)

where E and ν are the Young's modulus and Poisson ratio, and h is the thickness of the thin plate. For an isotropic thin plate, the constitutive equation can be expressed in matrix form as

(M_x, M_y, M_xy)^T = D [1 ν 0; ν 1 0; 0 0 (1 − ν)/2] (κ_x, κ_y, κ_xy)^T. (5)

The shear forces can be obtained in terms of the generalized stress components as

Q_x = ∂M_x/∂x + ∂M_xy/∂y, Q_y = ∂M_xy/∂x + ∂M_y/∂y, (6)

or, in terms of the deflection,

Q_x = −D ∂(∇²w)/∂x, Q_y = −D ∂(∇²w)/∂y. (7)

The differential equation for the deflection of a thin plate based on Kirchhoff's assumptions can be expressed in terms of the transversal deflection as

D ∇⁴w = p, (8)

where ∇⁴( ) = ∂⁴/∂x⁴ + 2 ∂⁴/∂x²∂y² + ∂⁴/∂y⁴ is commonly called the biharmonic operator. Consequently, the Kirchhoff plate bending problem boils down to a fourth-order PDE problem, which poses difficulty for traditional mesh-based methods in constructing H² regular shape functions. Moreover, the boundary conditions of the Kirchhoff plate considered in this paper can generally be classified into three types.

For a clamped edge boundary, Γ1: w = w̄, ∂w/∂n = θ̄_n, where w̄, θ̄_n are prescribed functions of arc length along this boundary.
For a simply supported edge boundary, Γ2: w = w̄, M_n = M̄_n, where M̄_n is also a prescribed function of arc length along this boundary.
For a free edge boundary, Γ3: M_n = M̄_n, ∂M_ns/∂s + Q_n = q̄, where q̄ is the load exerted along this boundary. (9)
It should be noted that n and s here refer to the normal and tangent directions along the boundaries.
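As a quick numerical sanity check of the biharmonic operator in Equation 8, the following plain-Python sketch approximates ∇⁴w by central finite differences for a hypothetical test field w = sin(πx) sin(πy), for which ∇⁴w = 4π⁴ w exactly; the field and stencil spacing are illustrative choices, not part of the proposed method:

```python
import math

def w(x, y):
    # hypothetical test field on the unit square: w = sin(pi x) sin(pi y)
    return math.sin(math.pi * x) * math.sin(math.pi * y)

def biharmonic_fd(f, x, y, h=1e-2):
    """Central-difference approximation of
    grad^4 f = f_xxxx + 2 f_xxyy + f_yyyy."""
    fxxxx = (f(x - 2*h, y) - 4*f(x - h, y) + 6*f(x, y)
             - 4*f(x + h, y) + f(x + 2*h, y)) / h**4
    fyyyy = (f(x, y - 2*h) - 4*f(x, y - h) + 6*f(x, y)
             - 4*f(x, y + h) + f(x, y + 2*h)) / h**4
    def fxx(xx, yy):  # second x-derivative by a 3-point stencil
        return (f(xx - h, yy) - 2*f(xx, yy) + f(xx + h, yy)) / h**2
    fxxyy = (fxx(x, y - h) - 2*fxx(x, y) + fxx(x, y + h)) / h**2
    return fxxxx + 2*fxxyy + fyyyy

x0, y0 = 0.3, 0.7
approx = biharmonic_fd(w, x0, y0)
exact = 4 * math.pi**4 * w(x0, y0)
```

In the deep collocation method itself these derivatives are obtained by automatic differentiation rather than finite differences; the stencil here merely verifies the operator definition.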

Deep Collocation Method for solving Kirchhoff plate bending
In this section, we begin by introducing some preliminaries of deep learning, including the feedforward neural network architecture and some useful algorithms involved in deep learning. On that basis, the formulation of the deep collocation method is then elucidated.

Feed forward neural network
The basic architecture of a fully connected feedforward neural network is shown in Figure 2. It comprises multiple layers: an input layer, one or more hidden layers, and an output layer. Each layer consists of one or more nodes called neurons, shown in Figure 2 by small coloured circles, which are the basic units of computation. In this interconnected structure, every two neurons in neighbouring layers are joined by a connection, which is represented by a connection weight. As depicted in Figure 2, the weight between neuron k in hidden layer l − 1 and neuron j in hidden layer l is denoted by w^l_jk. No connections exist among neurons within the same layer or between non-neighbouring layers. Input data, defined from x_1 to x_N, flow through the network via the connections between neurons, starting from the input layer, through hidden layers l − 1 and l, to the output layer, which eventually outputs data from y_1 to y_M. The feedforward neural network thus defines a mapping F_NN: R^N → R^M. It should be noted that the number of neurons in each hidden layer and the number of hidden layers can be arbitrary and are usually determined through a trial and error procedure. It has also been proved that any continuous function can be approximated with any desired precision by a feedforward network with even a single hidden layer [52,53].
Each neuron in the feedforward neural network, except those in the input layer, is supplied with a bias, defined as b^l_j for neuron j in layer l. Besides, an activation function is applied to the output of each neuron in order to introduce nonlinearity into the neural network and make backpropagation possible, whereby gradients are supplied along with an error to update the weights and biases. The activation function in layer l is denoted by σ here. Many activation functions can be used, such as the sigmoid function, the hyperbolic tangent function (Tanh), and rectified linear units (ReLU); some suggestions on the choice of activation function can be found in [54]. Each neuron in the hidden layers and the output layer thus adds the weighted sum of the output values from the previous layer, with the corresponding connection weights, to the bias on the neuron. This intermediate quantity for neuron j on hidden layer l is defined as

a^l_j = Σ_k w^l_jk y^(l−1)_k + b^l_j, (10)

and the neuron's output is given by the activation of this weighted input,

y^l_j = σ(a^l_j), (11)

where y^(l−1)_k is the output from the previous layer. So, when Equation 11 is applied to compute y^l_j, the intermediate quantity a^l_j is calculated along the way; this quantity turns out to be useful and is named here the weighted input to neuron j on hidden layer l. Equation 10 can be written in a compact matrix form, which calculates the weighted inputs for all neurons on a given layer efficiently:

a^l = W^l y^(l−1) + b^l, (12)

and accordingly, from Equation 12, y^l = σ(a^l), where the activation function is applied elementwise. A feedforward network thus defines a function f (x; θ) depending on the input data x and parametrised by θ, consisting of the weights and biases in each layer. The defined function provides an efficient way to approximate unknown field variables.
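The forward pass of Equations 10-12 can be sketched in a few lines of plain Python; the tiny 2-2-1 network and its hand-picked weights below are purely illustrative, not part of the paper's trained models:

```python
import math

def forward(x, weights, biases):
    """Forward pass: a^l = W^l y^(l-1) + b^l, then y^l = tanh(a^l)
    on hidden layers and the identity on the output layer."""
    y = list(x)
    for l, (W, b) in enumerate(zip(weights, biases)):
        a = [sum(wjk * yk for wjk, yk in zip(Wj, y)) + bj
             for Wj, bj in zip(W, b)]
        y = a if l == len(weights) - 1 else [math.tanh(aj) for aj in a]
    return y

# illustrative 2-2-1 network
weights = [[[1.0, 0.0], [0.0, 1.0]],   # W^1: 2x2 identity
           [[1.0, 1.0]]]               # W^2: 1x2
biases = [[0.0, 0.0], [0.0]]
out = forward([0.5, -0.5], weights, biases)   # tanh(0.5) + tanh(-0.5) = 0
```

With the identity first layer and zero biases, the two hidden outputs cancel by the odd symmetry of tanh, which makes the expected result easy to verify by hand.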

Backpropagation
Backpropagation (backward propagation) is an important and computationally efficient mathematical tool for computing gradients in deep learning [55]. Essentially, backpropagation recursively applies the chain rule and decides, from the computational graph, which computations can be run in parallel. In our problem, the governing equation involves fourth-order partial derivatives of the field variable w (x) approximated by the deep neural network f (x; θ), which makes backpropagation play a critical role. For the approximation defined by f (x; θ), in order to find the weights and biases, a loss function L (f, w) is defined and minimised [56]. The backpropagation algorithm for computing the gradient of the loss function L (f, w) can be stated as follows [55]:
• Input: input dataset x_1, ..., x_n; prepare the activation y^1 for the input layer;
• Feedforward: for each layer l = 2, 3, ..., L, compute a^l = W^l y^(l−1) + b^l and y^l = σ(a^l);
• Output error: compute the error δ^L = ∇_y L ⊙ σ′(a^L);
• Backpropagation of errors: for each l = L − 1, L − 2, ..., 2, compute δ^l = ((W^(l+1))^T δ^(l+1)) ⊙ σ′(a^l);
• Output: the gradient of the loss function is given by ∂L/∂w^l_jk = y^(l−1)_k δ^l_j and ∂L/∂b^l_j = δ^l_j.
Here, ⊙ denotes the Hadamard product. A list of deep learning frameworks is now available for setting up training. The two main ones, PyTorch and TensorFlow, however, compute derivatives on the computational graph differently: the former inputs a numerical value and then computes the derivatives at that node, while the latter computes the derivatives of a symbolic variable and stores the derivative operations in new nodes added to the graph for later use. The latter is therefore more advantageous for computing higher-order derivatives, which can be obtained from the extended graph by running backpropagation repeatedly.
In this paper, since fourth-order derivatives of the field variable need to be computed, the TensorFlow framework is adopted for the calculations [57].
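The delta recursion above can be illustrated on a minimal single-hidden-layer network. The sketch below (plain Python, with made-up parameter values) computes the gradients by backpropagation and checks one of them against a central finite difference, which is a standard way to validate a backpropagation implementation:

```python
import math

def loss(w1, b1, w2, b2, x, t):
    """Scalar-input network: one tanh hidden layer, linear output,
    squared-error loss L = 0.5 (y - t)^2."""
    y1 = [math.tanh(w * x + b) for w, b in zip(w1, b1)]
    y2 = sum(w * y for w, y in zip(w2, y1)) + b2
    return 0.5 * (y2 - t) ** 2

def backprop(w1, b1, w2, b2, x, t):
    """Gradients of the loss via the delta recursion stated above."""
    a1 = [w * x + b for w, b in zip(w1, b1)]
    y1 = [math.tanh(a) for a in a1]
    y2 = sum(w * y for w, y in zip(w2, y1)) + b2
    d2 = y2 - t                                   # output error delta^L
    d1 = [w * d2 * (1 - math.tanh(a) ** 2)        # (W^T d) * sigma'(a)
          for w, a in zip(w2, a1)]
    return [d * x for d in d1], d1, [d2 * y for y in y1], d2

# made-up parameters for the check
w1, b1, w2, b2, x, t = [0.3, -0.2], [0.1, 0.05], [0.4, 0.7], -0.1, 0.8, 0.5
gw1, gb1, gw2, gb2 = backprop(w1, b1, w2, b2, x, t)

# finite-difference check of dL/dw1[0]
eps = 1e-6
fd = (loss([w1[0] + eps, w1[1]], b1, w2, b2, x, t)
      - loss([w1[0] - eps, w1[1]], b1, w2, b2, x, t)) / (2 * eps)
```

Agreement between `gw1[0]` and `fd` to several digits confirms the recursion; frameworks such as TensorFlow perform the same chain-rule bookkeeping automatically on the computational graph.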

Formulation of deep collocation method
The formulation of the deep collocation method for solving Kirchhoff plate bending problems is introduced in this section. The collocation method is widely used for seeking numerical solutions of ordinary differential, partial differential, and integral equations [58]. It is also a popular method for trajectory optimization in control theory, where a set of randomly distributed points (known as collocation points) is often deployed to represent a desired trajectory that minimizes a loss function while satisfying a set of constraints. Collocation methods tend to be relatively insensitive to instabilities (such as exploding/vanishing gradients in neural networks), so they offer a viable way to train the deep neural networks in this paper [59].
Recalling Equations 8 and 9 from Section 2, solving the Kirchhoff plate bending problem boils down to solving a fourth-order biharmonic equation subject to the given type of boundary constraints. We thus first discretize the physical domain with collocation points denoted by x_Ω = (x_1, ..., x_{N_Ω})^T. Another set of collocation points, denoted by x_Γ = (x_1, ..., x_{N_Γ})^T, is deployed to discretize the boundary conditions. The transversal deflection w is then approximated with the aforementioned deep feedforward neural network, w^h (x; θ). A loss function can thus be constructed so that the approximate solution minimizes the residuals of the governing equation and boundary conditions evaluated through w^h (x; θ). The mean squared error loss form is adopted here.
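The discretization step can be sketched as follows; since the paper does not specify its sampler, the uniform random sampling over a rectangle and the round-robin edge assignment below are illustrative assumptions:

```python
import random

def collocation_points(n_dom, n_bnd, a=1.0, b=1.0, seed=0):
    """Random interior collocation points x_Omega plus boundary points
    x_Gamma spread over the four edges of an a-by-b rectangular plate."""
    rng = random.Random(seed)
    x_dom = [(rng.uniform(0.0, a), rng.uniform(0.0, b)) for _ in range(n_dom)]
    x_bnd = []
    for i in range(n_bnd):
        s = rng.random()
        edge = i % 4                 # cycle through the four edges
        if edge == 0:
            x_bnd.append((s * a, 0.0))
        elif edge == 1:
            x_bnd.append((s * a, b))
        elif edge == 2:
            x_bnd.append((0.0, s * b))
        else:
            x_bnd.append((a, s * b))
    return x_dom, x_bnd

x_dom, x_bnd = collocation_points(1000, 200)
```

Fixing the seed makes the point sets reproducible across training runs; for non-rectangular plates (e.g. the circular plate below) the sampler would be adapted to the domain geometry.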
Substituting w^h (x_Ω; θ) into Equation 8, we can define the residual

G (x_Ω; θ) = D ∇⁴w^h (x_Ω; θ) − p (x_Ω),

which results in a physics-informed deep neural network G (x_Ω; θ).
The boundary conditions illustrated in Section 2, considering all three boundary types, can also be expressed through the neural network approximation w^h (x_Γ; θ). On Γ1: w^h (x_Γ1; θ) = w̄ and ∂w^h/∂n (x_Γ1; θ) = θ̄_n. On Γ2: w^h (x_Γ2; θ) = w̄ and M_n (x_Γ2; θ) = M̄_n, where M_n (x_Γ2; θ) can be obtained from Equation 5 by combining derivatives of w^h (x_Γ2; θ).
On Γ3: M_n (x_Γ3; θ) = M̄_n and ∂M_ns/∂s (x_Γ3; θ) + Q_n (x_Γ3; θ) = q̄, where M_ns (x_Γ3; θ) can be obtained from Equation 5 and Q_n (x_Γ3; θ) from Equation 7 by combining derivatives of w^h (x_Γ3; θ). It should be noted that n, s here refer to the normal and tangent directions along the boundaries. The induced physics-informed neural networks G (x; θ), M_n (x; θ), M_ns (x; θ), and Q_n (x; θ) share the same parameters as w^h (x; θ). Considering the generated collocation points in the domain and on the boundaries, the parameters can be learned by minimizing the mean squared error loss function

L (θ) = MSE_G + MSE_Γ1 + MSE_Γ2 + MSE_Γ3,

where each term is the mean of the squared residuals of the governing equation or of the corresponding boundary conditions over their collocation points, for instance MSE_G = (1/N_Ω) Σ_{i=1}^{N_Ω} |G (x_Ω^i; θ)|². Our goal becomes to find a set of parameters θ such that the approximated deflection w^h (x; θ) minimizes the loss L (θ); if L (θ) attains a very small value, the approximation w^h (x; θ) very closely satisfies the governing equation and boundary conditions. Solving the thin plate bending problem by the deep collocation method thus reduces to an optimization problem. The deep learning frameworks TensorFlow and PyTorch offer a variety of optimizers. One of the most widely used gradient-descent-based methods is the Adam optimization algorithm [60], which is adopted in the numerical studies in this paper. It takes a descent step at collocation point x_i with Adam-based learning rate α_i,

θ_{i+1} = θ_i − α_i ∇_θ L (x_i; θ_i), (20)

and the process in Equation 20 is repeated until a convergence criterion is satisfied.
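The Adam update used in Equation 20 can be sketched in plain Python. The toy quadratic loss below (with an analytic gradient) is only an illustration of the update rule, not the plate loss:

```python
def adam_minimize(grad, theta, steps=3000, alpha=0.01,
                  beta1=0.9, beta2=0.999, eps=1e-8):
    """Plain Adam update [60] applied to a parameter list."""
    m = [0.0] * len(theta)
    v = [0.0] * len(theta)
    for t in range(1, steps + 1):
        g = grad(theta)
        for i in range(len(theta)):
            m[i] = beta1 * m[i] + (1 - beta1) * g[i]
            v[i] = beta2 * v[i] + (1 - beta2) * g[i] ** 2
            mhat = m[i] / (1 - beta1 ** t)        # bias-corrected moments
            vhat = v[i] / (1 - beta2 ** t)
            theta[i] -= alpha * mhat / (vhat ** 0.5 + eps)
    return theta

# toy "loss" L = (t0 - 1)^2 + (t1 + 2)^2, minimised at (1, -2)
theta = adam_minimize(lambda th: [2 * (th[0] - 1), 2 * (th[1] + 2)],
                      [0.0, 0.0])
```

In practice a framework optimizer is used and the gradient comes from backpropagation through the loss L (θ); the hand-rolled loop only exposes the moment estimates and bias correction behind Equation 20.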

Numerical examples
In this section, several numerical examples of plate bending problems with various shapes and boundary conditions are studied. For the implementation, a combined optimizer suggested by Berg et al. [46] is adopted: the L-BFGS optimizer [61] is used first and, in line searches where L-BFGS may fail, an Adam optimizer is then applied with a very small learning rate. For all numerical examples, the predicted maximum transversal deflection with an increasing number of layers is studied in order to show the convergence of the deep collocation method in solving plate bending problems.

Simply-supported square plate
A simply-supported square plate under a sinusoidal distribution of transverse loading is studied. The distributed load is given by

p (x, y) = p_0 sin (πx/a) sin (πy/b).

Here, a and b are the side lengths of the plate, and D denotes the flexural stiffness of the plate, which depends on the plate thickness and material properties. The exact solution for this problem is given by

w (x, y) = p_0 / (π⁴ D (1/a² + 1/b²)²) sin (πx/a) sin (πy/b),

where w represents the transverse plate deflection. For this numerical example, we first generate 1000 randomly distributed collocation points in the physical domain, depicted in Figure 3. We then thoroughly study the influence of deep neural networks with varying numbers of hidden layers and neurons on the maximum deflection at the centre of the plate, shown in Table 1. The numerical results are compared with the exact solution. It is clear that the results predicted with more hidden layers are more desirable, especially for neural networks with three hidden layers. To better reflect the deflection over the whole physical domain, the contour plot and contour error plot of the deflection for an increasing number of hidden layers with 50 neurons are shown in Figure 5 to Figure 7.
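The exact Navier solution used for comparison can be evaluated directly. The sketch below assumes a unit square plate with unit stiffness and unit load amplitude (illustrative values; the paper does not state its material parameters here):

```python
import math

def w_exact(x, y, a=1.0, b=1.0, D=1.0, p0=1.0):
    """Navier solution of D grad^4 w = p0 sin(pi x/a) sin(pi y/b)
    for a simply supported rectangular plate."""
    c = p0 / (math.pi ** 4 * D * (1.0 / a ** 2 + 1.0 / b ** 2) ** 2)
    return c * math.sin(math.pi * x / a) * math.sin(math.pi * y / b)

w_max = w_exact(0.5, 0.5)   # maximum deflection at the centre of the unit plate
```

For a = b = D = p0 = 1, the centre deflection reduces to 1/(4π⁴), and the sine factors make the simply supported condition w = 0 hold identically on all four edges.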
In Table 1, we employed a varying number of hidden layers from 1 to 4, with the number of neurons in each layer varying from 20 to 60, and calculated the corresponding maximum transversal deflection at the centre of the square plate. The L2 relative error of the deflection vector at all predicted points is shown in Figure 4 for each case. It is very clear that even for the neural network with only a single hidden layer of 20 neurons, the results are already very accurate and favourable. For most cases, with increasing neurons and hidden layers, the results converge to the exact solution, and the results are very accurate even with a few neurons and a single hidden layer. In Figure 4, all three network depths give very accurate results. Though the single layer with 20 neurons is the most accurate among the three depths at 20 neurons, the relative errors are all of magnitude 1 × 10⁻⁴, and the other two results are also very accurate. As the numbers of hidden layers and neurons increase, the relative error curves flatten and the results settle around the exact solution.
From Figure 5, Figure 6, and Figure 7, we can observe that the deflection is accurately predicted by the deep collocation method and agrees well with the exact solution. As the number of hidden layers increases, the numerical results converge to the exact solution over the whole square plate, and the predicted plate deformation agrees well with the exact deformation. All of this lends credence to the suitability of this deep learning based method. The advantage of neural networks with more hidden layers is not conspicuous in this numerical example; the next numerical example shows it more clearly.

Clamped square plate
A clamped square plate under a uniformly distributed transverse loading is also analyzed with the deep collocation method in this section. No explicit exact solution for the deflection over the whole plate is available. To better illustrate the accuracy of the method, the analytical series solution obtained by the Galerkin method in [62] is adopted as a comparison. For the maximum transversal deflection at the centre of an isotropic square plate, the Ritz method gives w_max = 0.00133 qa⁴/D [62], and Timoshenko and Woinowsky-Krieger [63] gave an exact solution w_max = 0.00126 qa⁴/D. Here, D denotes the flexural stiffness of the plate, which depends on the plate thickness and material properties, and a is the side length of the plate. 1000 randomly generated collocation points, as in Figure 3, are also used to discretize the clamped square plate.
For this clamped case, deep feedforward neural networks with increasing numbers of layers and neurons are again studied in order to validate the convergence of the scheme. First, the maximum central deflection shown in Table 2 is calculated for varying layers and neurons and compared with the aforementioned Ritz method, Galerkin method, and the exact solution by Timoshenko. Our deep collocation method gives the results most agreeable with the exact solution. For neural networks with a single hidden layer, however, the results are not that accurate even with 60 neurons, although they do become more accurate as the neuron number increases; the same can be observed for the deeper networks. Additionally, as the number of hidden layers increases, the results are much more accurate than those of the single-hidden-layer neural network.
The relative error with respect to the analytical solution for different hidden layers and different neurons is shown in Figure 8. Although the magnitude of the relative error of the deflection for this numerical example is 1 × 10⁻⁴, this does not mean that our deep collocation method is inaccurate for this problem: the reference deflection vector used to compute the relative error is obtained from the Galerkin method, and Table 2 shows that our method gives a more accurate maximum deflection than the Galerkin method. As the number of hidden layers increases, the two flat relative error curves nearly coincide and converge to the exact solution.

Finally, to better depict the performance of our method, the deflection contour, relative error contour, and deformed deflection of the middle surface are also shown for the deep neural network with three layers and 50 neurons in Figure 9. It is clear that the deep collocation method yields results that agree well with the analytical solution.

Clamped circular plate
A clamped circular plate with radius R under a uniform load p is studied here. 1000 collocation points, shown in Figure 10, are first deployed over the circular plate. Then the deep collocation method is applied to study the deformation of this circular plate. This problem has an exact solution [63]:

w (r) = p (R² − r²)² / (64D),

where r is the distance from the centre of the plate and D denotes the flexural stiffness of the plate, which depends on the plate thickness and material properties.
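The exact solution can be checked directly against the clamped boundary conditions w (R) = 0 and ∂w/∂r (R) = 0; the sketch below assumes unit radius, load, and stiffness (illustrative values), and approximates the edge slope by a one-sided difference:

```python
import math

def w_circ(r, R=1.0, p=1.0, D=1.0):
    """Exact deflection of a clamped circular plate under uniform load p
    [63]: w(r) = p (R^2 - r^2)^2 / (64 D)."""
    return p * (R ** 2 - r ** 2) ** 2 / (64.0 * D)

w_max = w_circ(0.0)      # p R^4 / (64 D) at the centre
w_edge = w_circ(1.0)     # clamped edge: w = 0
dw_edge = (w_circ(1.0) - w_circ(1.0 - 1e-6)) / 1e-6   # slope ~ 0 at the edge
```

Both clamped conditions are satisfied identically because (R² − r²)² and its derivative vanish at r = R, which is what the loss terms on Γ1 enforce for the network approximation.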
The maximum deflection at the centre of the circular plate with varying hidden layers and neurons is listed in Table 3 and compared with the exact solution. It is obvious that the predicted maximum deflection is very accurate, and as the numbers of neurons and hidden layers increase, the maximum deflection approaches the exact solution ever more closely.
The relative error for the deflection of the clamped circular plate with increasing hidden layers and neurons is shown in Figure 11 in order to show the convergence of the method. From this figure, we can see that as the number of hidden layers increases, the relative error curves flatten and converge very well to the exact solution; indeed, all the neural networks perform well, with a relative error magnitude of 1 × 10⁻⁴. Finally, the deformation contour, deflection error contour, and predicted and exact deformation figures are displayed in Figure 12. The deflection of this circular plate agrees well with the exact solution. The accuracy of the collocation method is shown here once again, which also illustrates that the deep collocation method can be easily and agreeably applied to simulate the deformation of plates of various shapes.

Simply-supported square plate on Winkler foundation
The simply-supported square plate resting on a Winkler foundation is studied in this section. The Winkler model assumes that the foundation's reaction p (x, y) can be described by p (x, y) = kw, with k a constant called the foundation modulus. Considering a plate on a continuous Winkler foundation, the governing Equation 8 can be written as

D ∇⁴w + kw = p.

The analytical series solution for this numerical example is given in [63]. For this numerical example, the arrangement of collocation points is the same as that in Figure 3. For the implementation details, neural networks with different numbers of neurons and depths are applied in the calculation. The maximum deflections at the central point, shown in Table 4, are first studied in all those cases in order to unveil the accuracy of the deep collocation method.
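As an illustrative sketch of the analytical reference, the following Navier-type series (assuming a uniform load q expanded in odd sine terms, consistent with the Navier approach in [63]) shows how the foundation modulus k simply augments the plate-stiffness term in each denominator:

```python
import math

def w_winkler(x, y, a=1.0, b=1.0, D=1.0, k=0.0, q=1.0, terms=30):
    """Navier-type series for a simply supported plate on a Winkler
    foundation under uniform load q; each modal denominator gains + k."""
    w = 0.0
    for m in range(1, 2 * terms, 2):        # odd terms only for uniform load
        for n in range(1, 2 * terms, 2):
            q_mn = 16.0 * q / (math.pi ** 2 * m * n)
            denom = D * math.pi ** 4 * ((m / a) ** 2 + (n / b) ** 2) ** 2 + k
            w += (q_mn / denom) * (math.sin(m * math.pi * x / a)
                                   * math.sin(n * math.pi * y / b))
    return w

w0 = w_winkler(0.5, 0.5)            # k = 0: plain simply supported plate
wk = w_winkler(0.5, 0.5, k=1000.0)  # a stiff foundation reduces the deflection
```

Setting k = 0 recovers the classical uniformly loaded simply supported plate, whose centre deflection is about 0.00406 qa⁴/D, while increasing k monotonically reduces the deflection, as the augmented denominators make explicit.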
Good agreement can be observed in this numerical example as well. From Table 4, we can observe that as the numbers of hidden layers and neurons grow, the maximum deflection becomes more accurate and approaches the analytical series solution, even with just two hidden layers. The relative error shown in Figure 13 better depicts the advantage of deep neural networks over shallow wide ones. With more hidden layers, as the number of neurons increases, the relative error curve flattens and comes very close to zero, which shows that the deep collocation method with only two hidden layers can already approximate the deflection well.
To better illustrate the deflection distribution over the whole plate, the deflection contour, deflection error contour, and deformation contour on the deformed configuration are shown in Figure 14 and compared with the analytical solution. It is demonstrated that the proposed method agrees well with the analytical solution.

Conclusions
In this study, a deep collocation method is proposed for the bending analysis of Kirchhoff plates of various shapes, loads, and boundary conditions. The governing equation of this problem is a fourth-order partial differential equation (the biharmonic equation), an important class of PDE in engineering mechanics. The proposed deep collocation method is a truly "mesh-free" method and can be used to approximate any continuous function, which makes it very suitable for the analysis of thin plate bending problems. The deep collocation method is very simple to implement and can be further applied to a wide variety of engineering problems. Moreover, the deep collocation method with randomly distributed collocation points and deep neural networks performs very well with an MSE loss function minimized by the combined L-BFGS and Adam optimizer. An accurate result can be obtained even for the single-layer, 20-neuron case. As the numbers of hidden layers and neurons per layer increase, most results become more accurate and converge to the exact and analytical solutions. For circular plates, the method becomes extremely efficient and accurate: accurate results can be obtained with only a few layers and neurons. More importantly, once the deep neural networks are trained, they can be used to evaluate the solution at any desired point with minimal additional computation time.
However, some intriguing issues remain to be studied for deep neural network based methods, such as the influence of the choice of neural network type, activation function, loss function form, weight/bias initialization, and optimizer on the accuracy and efficiency of this deep collocation method, which will be addressed in our future research.