Abstract

Recently, neural network methods for solving differential equations have made remarkable progress, including for fractional differential equations. In this paper, a neural network method is employed to solve the time-fractional telegraph equation. A loss function containing the initial and boundary conditions, with adjustable parameters (weights and biases), is constructed, and the time-fractional telegraph equation is thereby formulated as an optimization problem. Numerical examples with known analytic solutions are solved to confirm the accuracy of the method, and the numerical results, graphs, weights, and biases are discussed. The graphical and tabular results are analyzed thoroughly, and the mean square errors for different choices of neurons and epochs are presented in tables along with graphical presentations.

1. Introduction

Fractional differential equations can be used to model many real-life problems. Recently, fractional partial differential equations have received much attention from researchers due to their wide applications in the biological sciences and medicine [1–3]. Moreover, the studies in [4, 5] have emphasized properties of solutions of fractional differential equations, such as stability and existence. In particular, fractional telegraph equations arise in many science and engineering fields, such as signal analysis, random walks, wave propagation, and electrical transmission lines [6, 7], but they are hard to solve. Accordingly, many methods have been utilized to find solutions of fractional differential equations, for instance, spectral methods [8–10], the finite-element method [11, 12], the differential transform method [13], and other methods [14, 15]. Moreover, Hosseini et al. [16] discussed the fractional telegraph equation using a radial basis function approach. Furthermore, Zhang [17] and Meerschaert and Tadjeran [18] employed finite-difference approaches for the solution of fractional partial differential equations. Recently, solving fractional differential equations by neural network methods has become an active research area.

A neural network is a type of machine learning algorithm with a remarkable ability to handle large-scale problems. It is based on the idea of minimizing a loss function so that the network output best approximates the solution of a mathematical problem, and neural networks are increasingly used for challenging mathematical problems. The continuously rising success of neural network techniques applied to differential equations (ODEs and PDEs) [19, 20] has stimulated research in solving fractional differential equations with neural network methods. This study focuses on solving the fractional telegraph equation with a neural network method. As neural network technology is advancing rapidly, both in methodology and algorithms, we believe this is a timely contribution that can benefit researchers across a wide range of scientific domains.

Lagaris et al. [21] solved both ordinary and partial differential equations with neural network approaches, using trial solutions constructed to satisfy the boundary conditions exactly. Later, Piscopo et al. [22] extended this idea by incorporating the boundary conditions into the loss function. This paper extends that idea to solving the time-fractional telegraph equation. To the researchers' knowledge, there has been little study on solving fractional partial differential equations with neural network approaches. In [23], the fractional diffusion equation was solved with a Legendre polynomial-based neural network algorithm. Pang et al. [24] proposed fractional physics-informed neural networks (fPINNs) to solve fractional differential equations. The main contribution of this paper is an artificial neural network algorithm for solving time-fractional telegraph equations. In this paper, we consider a time-fractional telegraph equation of the form [25]
$$\frac{\partial^{2\beta} u(x,t)}{\partial t^{2\beta}} + 2\lambda\, \frac{\partial^{\beta} u(x,t)}{\partial t^{\beta}} = \frac{\partial^{2} u(x,t)}{\partial x^{2}} + f(x,t), \qquad \tfrac{1}{2} < \beta \le 1, \tag{1}$$
subject to given initial and boundary conditions, where the time derivatives are understood in the Caputo sense.

The rest of this paper is organized as follows. In Section 2, a short review of fractional calculus is presented. In Section 3, the algorithms for solving equation (1) are given. In Section 4, numerical examples are solved to illustrate the effectiveness of the neural network method. Section 5 gives the conclusion.

2. Preliminaries

Definition 1 (see [26]). The function $\Gamma$, defined by
$$\Gamma(z) = \int_{0}^{\infty} t^{z-1} e^{-t}\, dt, \qquad \operatorname{Re}(z) > 0,$$
is called Euler's gamma function (Euler's integral of the second kind).
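For instance, integration by parts yields the recursion $\Gamma(z+1) = z\,\Gamma(z)$, so that $\Gamma(n+1) = n!$ for every nonnegative integer $n$; also, $\Gamma(1/2) = \sqrt{\pi}$. These values appear repeatedly in the fractional-derivative formulas below.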

Definition 2 (see [27]). Let $\Omega = (a,b)$ with $-\infty \le a < b \le \infty$ and $1 \le p < \infty$. We denote by $L_p(a,b)$ the set of Lebesgue measurable functions $f$ on $\Omega$ for which $\|f\|_{L_p(a,b)} < \infty$, where
$$\|f\|_{L_p(a,b)} = \left( \int_{a}^{b} |f(t)|^{p}\, dt \right)^{1/p}.$$

In other words, $L_p(a,b)$ is (for $1 \le p < \infty$) the usual Lebesgue space.

Definition 3 (see [27]). Let $\alpha > 0$. The left-sided and right-sided Riemann–Liouville fractional integrals of order $\alpha$ for a function $f$ on $[a,b]$ are defined as
$$\big(I_{a+}^{\alpha} f\big)(x) = \frac{1}{\Gamma(\alpha)} \int_{a}^{x} \frac{f(t)}{(x-t)^{1-\alpha}}\, dt, \qquad x > a,$$
$$\big(I_{b-}^{\alpha} f\big)(x) = \frac{1}{\Gamma(\alpha)} \int_{x}^{b} \frac{f(t)}{(t-x)^{1-\alpha}}\, dt, \qquad x < b,$$
respectively. Here, $\Gamma(\cdot)$ denotes Euler's gamma function.
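For example, for the power function $f(t) = t^{\nu}$ with $\nu > -1$ and $a = 0$, evaluating the defining integral via the Beta function gives
$$\big(I_{0+}^{\alpha}\, t^{\nu}\big)(x) = \frac{\Gamma(\nu+1)}{\Gamma(\nu+\alpha+1)}\, x^{\nu+\alpha},$$
which reduces to ordinary $n$-fold integration when $\alpha = n$ is a positive integer.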

Definition 4 (see [27]). The left-sided and right-sided Riemann–Liouville fractional derivatives of order $\alpha \ge 0$, for a function $f$ on $[a,b]$, are defined by
$$\big(D_{a+}^{\alpha} f\big)(x) = \left(\frac{d}{dx}\right)^{\! n} \big(I_{a+}^{\,n-\alpha} f\big)(x), \qquad \big(D_{b-}^{\alpha} f\big)(x) = \left(-\frac{d}{dx}\right)^{\! n} \big(I_{b-}^{\,n-\alpha} f\big)(x),$$
respectively, where $n = [\alpha] + 1$ and $[\alpha]$ denotes the integer part of $\alpha$.

Definition 5 (see [26]). The Caputo fractional derivative of order $\alpha$ ($n-1 < \alpha < n$) of a function $f$ on $[a,b]$ is defined by
$$\big({}^{C}D_{a+}^{\alpha} f\big)(x) = \frac{1}{\Gamma(n-\alpha)} \int_{a}^{x} \frac{f^{(n)}(t)}{(x-t)^{\alpha-n+1}}\, dt.$$
In particular, when $0 < \alpha < 1$, we have
$$\big({}^{C}D_{a+}^{\alpha} f\big)(x) = \frac{1}{\Gamma(1-\alpha)} \int_{a}^{x} \frac{f'(t)}{(x-t)^{\alpha}}\, dt.$$
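As a worked example, take $f(t) = t^{2}$, $a = 0$, and $0 < \alpha < 1$ (so $n = 1$ and $f'(t) = 2t$):
$$\big({}^{C}D_{0+}^{\alpha}\, t^{2}\big)(x) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{x} \frac{2t}{(x-t)^{\alpha}}\, dt = \frac{2}{\Gamma(3-\alpha)}\, x^{2-\alpha},$$
which recovers the classical derivative $2x$ as $\alpha \to 1$.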

Definition 6 (see [24]). Grünwald–Letnikov finite-difference schemes: based on the stationary grid $x_j = a + jh$, $j = 0, 1, 2, \ldots$, for $1 < \alpha < 2$, the shifted GL finite-difference operator for approximating the fractional Laplacian is defined as
$$\delta_{h,s}^{\alpha} u(x_i) = \frac{1}{h^{\alpha}} \sum_{k=0}^{i+s} (-1)^{k} \binom{\alpha}{k}\, u\big(x_i - (k-s)h\big),$$
where the stencil is shifted by $s$ nodes to guarantee the stability of the schemes.

With the notation $g_k^{(\alpha)} = (-1)^{k} \binom{\alpha}{k}$, first-order, second-order, and third-order GL formulas for approximating the fractional Laplacian can be formed as weighted combinations of shifted GL operators with different shifts; the explicit weights are given in [28].
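As a concrete illustration (a minimal sketch of the first-order GL formula, not the code used in the paper), the weights $g_k^{(\alpha)} = (-1)^k\binom{\alpha}{k}$ can be generated by the recursion $w_0 = 1$, $w_k = \big(1 - \tfrac{\alpha+1}{k}\big) w_{k-1}$, and the scheme can be checked against the known derivative $D^{\alpha} t = t^{1-\alpha}/\Gamma(2-\alpha)$:

```python
import numpy as np
from scipy.special import gamma

def gl_weights(alpha, n):
    """GL weights w_k = (-1)^k * binom(alpha, k) via the stable recursion."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = (1.0 - (alpha + 1.0) / k) * w[k - 1]
    return w

def gl_derivative(f_vals, alpha, h):
    """First-order GL approximation of the order-alpha derivative on a
    uniform grid t_0, ..., t_n with step h, where f_vals[j] = f(t_j)."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    d = np.zeros(n + 1)
    for j in range(n + 1):
        # h^(-alpha) * sum_{k=0}^{j} w_k * f(t_{j-k})
        d[j] = np.dot(w[: j + 1], f_vals[j::-1]) / h ** alpha
    return d

# sanity check on f(t) = t, whose order-alpha derivative is t^(1-alpha)/Gamma(2-alpha)
t = np.linspace(0.0, 1.0, 201)
alpha = 0.5
approx = gl_derivative(t, alpha, t[1] - t[0])
exact = t ** (1.0 - alpha) / gamma(2.0 - alpha)
print(np.abs(approx - exact).max())  # decreases roughly linearly with the step h
```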

3. Solution Method

We construct an artificial neural network (NN) with one hidden layer of $n$ neurons, as shown in Figure 1. The output of the network, $N(x,t)$, is written as
$$N(x,t) = \sum_{j=1}^{n} v_j\, \sigma\big(w_j^{(1)} * x + w_j^{(2)} * t + b_j\big),$$
where $\sigma$ is the activation function, $w$, $v$, and $b$ are the weights and biases of the network, the symbol “∗” denotes ordinary scalar multiplication, and the inner and outer parameters belong to the hidden and final layers, respectively. Initially, the weights and biases of the network are generated randomly; they are then adjusted during training to minimize the loss function. In short, we seek the weights and biases of the neural network whose output best approximates the solution, $u(x,t)$, of problem (1).
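For concreteness, a minimal PyTorch sketch of such a one-hidden-layer network is given below; the paper states only that the implementation is in Python 3, so the library choice and the sigmoid activation are our assumptions:

```python
import torch

class OneLayerNet(torch.nn.Module):
    """Network N(x, t) with a single hidden layer of n neurons."""
    def __init__(self, n_neurons=50):
        super().__init__()
        self.hidden = torch.nn.Linear(2, n_neurons)  # hidden-layer weights and biases
        self.out = torch.nn.Linear(n_neurons, 1)     # final-layer weights and bias
        self.act = torch.nn.Sigmoid()                # activation (an assumption)

    def forward(self, x, t):
        # x and t are column tensors of shape (N, 1)
        return self.out(self.act(self.hidden(torch.cat([x, t], dim=1))))
```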

Equation (1) can be written as
$$\frac{\partial^{2\beta} u(x,t)}{\partial t^{2\beta}} + 2\lambda\, \frac{\partial^{\beta} u(x,t)}{\partial t^{\beta}} - \frac{\partial^{2} u(x,t)}{\partial x^{2}} - f(x,t) = 0. \tag{11}$$

A numerical solution of problem (1) is one that approximately minimizes the mean square of the left-hand side of equation (11), which has the same form as the mean square error (MSE) loss function of a neural network. In [21, 22], trial solutions for ordinary and partial differential equations were presented. Here, we construct the trial solution for the fractional telegraph equation, $u_{NN}(x,t)$, as the output of the neural network.

If we discretize the domain of the inputs, $(x,t)$, into a finite number of training points, say $\{(x_i,t_i)\}_{i=1}^{N}$, chosen from an equally spaced grid, then $u_{NN}$ can be obtained by determining the weights and biases that minimize the loss function of the network on the training points.
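For illustration, an equally spaced training grid on $[0,1] \times [0,1]$ (the interval and the $10 \times 10$ size are assumptions) can be generated as follows:

```python
import torch

nx, nt = 10, 10                                   # grid size (illustrative)
x = torch.linspace(0.0, 1.0, nx)
t = torch.linspace(0.0, 1.0, nt)
X, T = torch.meshgrid(x, t, indexing="ij")        # x-major ordering
x_train = X.reshape(-1, 1).requires_grad_(True)   # column of x_i values
t_train = T.reshape(-1, 1).requires_grad_(True)   # column of t_i values
```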

Let $F[u_{NN}](x_i, t_i)$ denote the left-hand side of equation (11) evaluated at the network output $u_{NN}$ at the training point $(x_i, t_i)$.

The following loss function is used:
$$J(w,b) = \frac{1}{N}\sum_{i=1}^{N} \big(F[u_{NN}](x_i,t_i)\big)^{2} + \frac{1}{N_{0}}\sum_{j=1}^{N_{0}} \big(u_{NN}(x_j,0) - u(x_j,0)\big)^{2} + \frac{1}{N_{b}}\sum_{k=1}^{N_{b}} \big(u_{NN}(x_k^{b},t_k) - u(x_k^{b},t_k)\big)^{2},$$
where the first sum runs over the interior training points and the last two sums enforce the initial and boundary conditions, with $(x_k^{b}, t_k)$ the boundary training points.

The problem is then reduced to minimizing $J(w,b)$ by adjusting the weights and biases of the network, for a given choice of hyperparameters (number of neurons, number of hidden layers, learning rate, momentum, and activation function), i.e.,
$$\min_{w,\,b} J(w,b).$$
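Continuing the sketches above, this minimization can be written as a standard PyTorch training loop. Here `pde_residual` stands for the left-hand side of equation (11) evaluated at the network output (a hybrid-derivative sketch of it follows the next paragraph), and the zero initial and boundary targets are placeholders for the actual problem data:

```python
# initial-line and boundary training points (sizes and values are placeholders)
x0 = torch.linspace(0.0, 1.0, 10).reshape(-1, 1)
t0 = torch.zeros_like(x0)
tb = torch.linspace(0.0, 1.0, 10).reshape(-1, 1).repeat(2, 1)
xb = torch.cat([torch.zeros(10, 1), torch.ones(10, 1)])  # boundaries x = 0, x = 1

def loss_fn(net):
    res = pde_residual(net, x_train, t_train)   # interior residual F[u_NN]
    u0 = net(x0, t0)                            # u_NN(x, 0)
    ub = net(xb, tb)                            # u_NN on the boundary
    return (res.pow(2).mean()                   # MSE of the PDE residual
            + u0.pow(2).mean()                  # placeholder initial data
            + ub.pow(2).mean())                 # placeholder boundary data

net = OneLayerNet(n_neurons=50)
opt = torch.optim.SGD(net.parameters(), lr=0.001, momentum=0.99)
for epoch in range(50000):                      # epoch counts as in Section 4
    opt.zero_grad()
    loss = loss_fn(net)
    loss.backward()                             # backpropagation
    opt.step()                                  # gradient-descent update
```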

To compute the loss, we need the derivatives of the network output, $N(x,t)$, with respect to its inputs. Lagaris et al. [21] showed how to obtain these partial derivatives; in practice they are computed by automatic differentiation. To obtain the fractional derivatives, the Grünwald–Letnikov (GL) method given in [28–30] is used. A simple hybrid of the two is then employed: integer-order derivatives via automatic differentiation and fractional-order derivatives via the GL formula.
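As an illustration of such a hybrid (our sketch, not the authors' exact code): the integer-order spatial derivative is obtained by automatic differentiation, while the fractional time derivatives are formed from the GL weights of Section 2 along each time line of the training grid. The order $\beta = 0.75$, the coefficient of the lower-order term, and the zero source term $f$ are placeholder assumptions for the terms of equation (11):

```python
BETA = 0.75                                      # fractional order (placeholder)

def gl_time_deriv(u_line, alpha, h):
    # GL approximation of the order-alpha time derivative of the values
    # u_line[j] = u(x, t_j) on a uniform time grid with step h
    n = u_line.shape[0] - 1
    w = [1.0]
    for k in range(1, n + 1):
        w.append((1.0 - (alpha + 1.0) / k) * w[-1])
    w = torch.tensor(w)
    rows = [torch.dot(w[: j + 1], torch.flip(u_line[: j + 1], dims=[0]))
            for j in range(n + 1)]
    return torch.stack(rows) / h ** alpha

def pde_residual(net, x_train, t_train, nx=10, nt=10):
    h = 1.0 / (nt - 1)                           # time step of the training grid
    u = net(x_train, t_train)
    # second spatial derivative by automatic differentiation
    ux = torch.autograd.grad(u, x_train, torch.ones_like(u), create_graph=True)[0]
    uxx = torch.autograd.grad(ux, x_train, torch.ones_like(ux), create_graph=True)[0]
    # fractional time derivatives, one spatial line at a time (x-major ordering)
    ug = u.reshape(nx, nt)
    d_hi = torch.stack([gl_time_deriv(ug[i], 2 * BETA, h) for i in range(nx)])
    d_lo = torch.stack([gl_time_deriv(ug[i], BETA, h) for i in range(nx)])
    # left-hand side of equation (11) with f(x, t) = 0 as a placeholder
    return d_hi + 2.0 * d_lo - uxx.reshape(nx, nt)
```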

The optimization is then carried out via backpropagation with the gradient descent method: the gradient of the loss function is computed and used to update the weights and biases of the network. The algorithm is implemented in Python 3; all examples were run on Google Colab connected to a “Google Compute Engine Backend (GPU)” (RAM: 0.80/12.72 GB, Disk: 38.40/68.40 GB).

4. Application

In this section, the neural network method for solving the time-fractional telegraph equation is tested on two examples with known exact solutions.

Example 1. Consider the time-fractional telegraph equation of the form (1) studied in [16], with the source term and exact solution as given there. The neural network is trained for 10000, 30000, and 50000 epochs, respectively, with the following hyperparameters: learning rate = 0.001, momentum = 0.99, and an equally spaced domain grid. The exact and approximate solutions are shown in Figure 2, and the weights and biases of the trained network with 50 neurons are given below:

Table 1 presents the variation of the mean square error for Example 1 with different numbers of epochs and neurons and different values of the fractional order. The results indicate that the mean square error diminishes as the number of epochs, the number of neurons, and the fractional order increase. Similarly, the approximate and exact solutions are shown in Figure 3 for the network with 50 neurons, and Figure 4 shows a scatter plot of the error function for the same parameters.

Example 2. Consider the time-fractional telegraph equation of the form (1) given in [25], with the source term and exact solution as given there. The neural network is trained for 50000 epochs with the following hyperparameters: learning rate = 0.001, momentum = 0.99, and an equally spaced domain grid. The approximate solution is shown in Figure 5, and the weights and biases of the trained network with 30 neurons are given below:

In Figure 6, the left graph shows a scatter plot of the error function and the right graph shows a scatter plot of the exact solution, both plotted on a mesh grid over the domain. The plots indicate that the approximate solution agrees closely with the exact solution.

5. Conclusion

A novel way to solve the time-fractional telegraph equation is proposed. The method extends an existing neural network approach for differential equations (both ODEs and PDEs) [22] to fractional differential equations. The approach is applied to time-fractional telegraph equations for which exact solutions are known. Compared with traditional fractional differential equation (FDE) solvers, such as the finite-difference and spectral methods, which obtain approximate solutions only at grid points, the proposed method can approximate the solution at any point of the domain after training on only a few sample points. However, considerable training is needed to reach higher accuracy; reducing this cost would be an interesting direction for future work.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Authors’ Contributions

Wubshet Ibrahim and Lelisa Kebena Bijiga contributed equally to this study.