Single-step and multi-step methods for Caputo fractional-order differential equations with arbitrary kernels

We develop four numerical schemes to solve fractional differential equations involving the Caputo fractional derivative with arbitrary kernels. First, we derive the four schemes, namely the explicit product integration rectangular rule (forward Euler method), the implicit product integration rectangular rule (backward Euler method), the implicit product integration trapezoidal rule, and an Adams-type predictor-corrector method. In addition, the error estimates and stability of all four schemes are analyzed. To demonstrate the accuracy and effectiveness of the proposed methods, numerical examples are considered for various linear and nonlinear fractional differential equations with different kernels. The results show that these numerical schemes are feasible in applications.


Introduction
Fractional calculus has been a mathematical theory of interest for over three centuries. Initially, however, the theory was not applied to any real situation. Scientists and engineers in other fields have since shown how fractional calculus can be used in practice and how to apply it in their studies. Fractional calculus and its applications can be found in many fields such as physics [34,35], neural networks [32], mechanics and dynamic systems [12,38], biology [15,40] and economics [37].
To the best of our knowledge, numerical schemes for fractional differential equations in the sense of the Φ-Caputo derivative have never been investigated. On the basis of the above works on fractional derivatives with arbitrary kernels, this paper investigates numerical approaches for the solution of nonlinear fractional differential equations involving the Φ-Caputo fractional derivative. In particular, the efficiency of the numerical schemes, error estimates, and stability analysis are considered.
The paper is organized as follows. In the next section, several preliminaries on fractional derivatives and integrals are presented. The initial value problem involving the Φ-Caputo fractional derivative is defined in Section 3, where we also present four numerical schemes, namely the explicit product integration rectangular rule, the implicit product integration rectangular rule, the implicit product integration trapezoidal rule, and the Adams predictor-corrector method, to find numerical solutions of fractional differential equations in the sense of the Φ-Caputo fractional derivative. Next, the error estimates of the approximations and the stability analysis are obtained in Section 4. In Section 5, the simulation results, including numerical convergence orders, are discussed for four test examples. The approximate solutions and the error estimates for the test examples are presented in figures and tables, respectively. Further, a comparative study of these numerical schemes is performed.

Preliminaries
In this section, we collect basic definitions and theorems that will be used to state and prove our main results. Let Φ be a continuously differentiable function on [t_0, T] such that Φ'(t) > 0 for all t ∈ [t_0, T], where n − 1 < α < n and n ∈ N.

Numerical schemes
In this work, we study the differential equation involving the Φ-Caputo fractional derivative of order α ∈ (0, 1), where f : [t_0, T] × R → R is a given continuous function and y_0 ∈ R.
By Theorem 2.1, the solution of problem (3.1) can be written in terms of an integral equation. However, it is difficult to obtain the exact solution of the Φ-Caputo fractional-order differential equation (3.1). To motivate the construction of our numerical methods, the concept of product integration rules [39] can be used to estimate the integral in (3.2) by appropriate polynomials.
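For orientation, the integral form referred to here is the standard equivalent Volterra equation for the Φ-Caputo initial value problem; a sketch, written with the problem data f, y_0 and the kernel Φ used throughout this paper, is

```latex
y(t) = y_0 + \frac{1}{\Gamma(\alpha)} \int_{t_0}^{t} \Phi'(\nu)\,\bigl(\Phi(t)-\Phi(\nu)\bigr)^{\alpha-1} f\bigl(\nu, y(\nu)\bigr)\,\mathrm{d}\nu , \qquad t \in [t_0, T].
```

Product integration replaces f(ν, y(ν)) on each subinterval by a polynomial and integrates the remaining kernel factor exactly, which is possible in closed form via the substitution z = Φ(ν).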
To numerically solve the integral equation (3.2), we consider the approximate solutions y_n, n = 1, 2, . . . , N, on a uniform grid. Additionally, we assume the approximations y_j ≈ y(t_j) (0 ≤ j ≤ n − 1) are available from the preceding steps. The piecewise decomposition of (3.2) is defined accordingly.
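Concretely, with a uniform step h = (T − t_0)/N and grid points t_j = t_0 + jh, the piecewise splitting of (3.2) takes the following form (a sketch consistent with the Volterra formulation of the problem):

```latex
y(t_n) = y_0 + \frac{1}{\Gamma(\alpha)} \sum_{j=0}^{n-1} \int_{t_j}^{t_{j+1}} \Phi'(\nu)\,\bigl(\Phi(t_n)-\Phi(\nu)\bigr)^{\alpha-1} f\bigl(\nu, y(\nu)\bigr)\,\mathrm{d}\nu .
```

Each subintegral is then handled by replacing f with a constant or a linear interpolant, while the singular kernel factor is integrated exactly.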

Explicit product integration rectangular rule (Ex. PI Rec.)
To obtain the approximation y_n ≈ y(t_n), the function f(ν, y(ν)) in the integrand of (3.3) is approximated, following the (explicit) forward Euler method, by the constant values f(t_j, y_j). The resulting method is (3.4).

Implicit product integration rectangular rule (Im. PI Rec.)
We follow similar steps as for the explicit product integration rectangular rule. However, the (implicit) backward Euler method is used to approximate the function in the integrand of (3.3). The resulting method is (3.6), where w_{n,j}^{α,Φ} is defined by (3.5).
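As an illustration, both rectangular rules can be sketched in a few lines of Python. This is a sketch under stated assumptions, not the paper's exact formulation: the weight formula below follows from integrating Φ'(ν)(Φ(t_n) − Φ(ν))^{α−1} exactly over each subinterval, and the function name `pi_rectangular` and the fixed-point iteration used to resolve the implicit equation are our own choices.

```python
import math

def pi_rectangular(f, phi, alpha, t0, T, y0, N, implicit=False, fp_iters=20):
    """Product integration rectangular rules for the Phi-Caputo problem
    D^{alpha,Phi} y(t) = f(t, y(t)), y(t0) = y0, with 0 < alpha < 1.
    Explicit (forward Euler) uses f(t_j, y_j); implicit (backward Euler)
    uses f(t_{j+1}, y_{j+1}), resolved here by fixed-point iteration."""
    h = (T - t0) / N
    t = [t0 + i * h for i in range(N + 1)]
    z = [phi(ti) for ti in t]          # kernel values Phi(t_i)
    y = [y0] * (N + 1)
    c = 1.0 / math.gamma(alpha + 1.0)  # 1 / Gamma(alpha + 1)
    for n in range(1, N + 1):
        # exact integral of Phi'(nu) (Phi(t_n) - Phi(nu))^(alpha - 1)
        # over [t_j, t_{j+1}], divided by alpha (absorbed into c above)
        w = [(z[n] - z[j]) ** alpha - (z[n] - z[j + 1]) ** alpha
             for j in range(n)]
        if not implicit:
            y[n] = y0 + c * sum(wj * f(t[j], y[j]) for j, wj in enumerate(w))
        else:
            lag = c * sum(w[j] * f(t[j + 1], y[j + 1]) for j in range(n - 1))
            yn = y[n - 1]              # initial guess for the implicit equation
            for _ in range(fp_iters):  # y_n = y0 + lag + c*w_{n-1}*f(t_n, y_n)
                yn = y0 + lag + c * w[n - 1] * f(t[n], yn)
            y[n] = yn
    return t, y
```

With Φ(t) = t this reduces to the classical Caputo rectangular rules; the fixed-point loop converges when c·w_{n−1}·L < 1, where L is the Lipschitz constant of f in its second argument, which holds for h sufficiently small.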

Implicit product integration trapezoidal rule (Im. PI Trap.)
In this method, we replace the function in each subinterval of the integrand of (3.3) by the first-degree polynomial interpolant. The implicit product integration trapezoidal rule is then obtained.

Adams predictor-corrector method (Adams PC)
According to (3.7), Newton's method would be needed to solve for y_n. However, we can instead predict the approximation y_n in (3.7) by using (3.4). The approximation from the predictor step is denoted y_n^p. Then y_n^p is substituted into (3.7); this is called the corrector step. Overall, this numerical scheme is called the Adams predictor-corrector method, where y_n^p is the result of the predictor step and y_n is the result of the corrector step. In order to carry out the stability and error analysis, some lemmas are given below.
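The predictor-corrector idea can be sketched as follows. This is our own illustrative implementation, not the paper's exact scheme: we assume the corrector interpolates f linearly in the variable z = Φ(ν), which yields closed-form trapezoidal weights; the names `phi_caputo_adams_pc`, `wl`, and `wr` are ours.

```python
import math

def phi_caputo_adams_pc(f, phi, alpha, t0, T, y0, N):
    """Adams predictor-corrector sketch for D^{alpha,Phi} y = f(t, y),
    y(t0) = y0: the predictor is the explicit PI rectangular rule, the
    corrector a PI trapezoidal rule with f interpolated linearly in
    z = Phi(nu) so that the weights are available in closed form."""
    h = (T - t0) / N
    t = [t0 + i * h for i in range(N + 1)]
    z = [phi(ti) for ti in t]
    y = [y0] * (N + 1)
    for n in range(1, N + 1):
        zn = z[n]
        # predictor step: piecewise-constant f(t_j, y_j)
        pred = sum(((zn - z[j]) ** alpha - (zn - z[j + 1]) ** alpha)
                   * f(t[j], y[j]) for j in range(n)) / math.gamma(alpha + 1.0)
        yp = y0 + pred
        # corrector step: piecewise-linear interpolation of f in z
        corr = 0.0
        for j in range(n):
            a, b = zn - z[j], zn - z[j + 1]
            dz = z[j + 1] - z[j]
            I1 = (a ** (alpha + 1) - b ** (alpha + 1)) / (alpha + 1)
            wl = (I1 - b * (a ** alpha - b ** alpha) / alpha) / dz  # weight of f(t_j, y_j)
            wr = (a * (a ** alpha - b ** alpha) / alpha - I1) / dz  # weight of right endpoint
            # on the last subinterval the unknown value is replaced by the predictor
            fr = f(t[n], yp) if j == n - 1 else f(t[j + 1], y[j + 1])
            corr += wl * f(t[j], y[j]) + wr * fr
        y[n] = y0 + corr / math.gamma(alpha)
    return t, y
```

A quick sanity check on the weights: wl + wr equals the rectangular weight (a^α − b^α)/α on each subinterval, and in the limit α → 1 both weights reduce to dz/2, the classical trapezoidal rule.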
Lemma 3.1. For α ∈ (0, 1) and j = 1, . . . , n − 1, we have Proof. Applying the mean value theorem, we obtain the first inequality. The second inequality can be proved in a similar way.
Lemma 3.2. Let α ∈ (0, 1), m ∈ N, and β be nonnegative. Then This implies that f(y) is a monotone increasing function, and then which completes the proof.
In order to verify the stability and error analysis of our numerical schemes, we present a modified Gronwall's inequality as follows.
where M and C are positive constants.
According to Lemma 3.2, we obtain Therefore, Now, we want to show the following result: In effect, we have Applying the following property of two gamma functions: Then, we have Therefore, we obtain Thus, the right-hand side of (3.10) is bounded, which means The proof is completed. Now, we state the following assumptions on the nonlinearity f.
Lemma 3.4. Assume that (H_1) holds. Let y(t) be the solution of problem (3.2) and let h > 0 be sufficiently small. Then, there exists a constant C independent of h such that Proof. By assumption (H_1), there is a positive constant M such that |f(t, y(t))| ≤ M. From the integral equation (3.2), we get Then, we have In the following, we claim that Using this inequality, one immediately gets Combining (3.11) and (3.12), we obtain which completes the proof.
Proof. By assumption (H_3) and Lemma 3.4, the result follows.

Lemma 3.6. Assume that (H_1) and (H_3) hold. Let y(t) be the solution of problem (3.2) and let h > 0 be sufficiently small. Then, we have where the weight w_{n,j}^{α,Φ} is given by (3.6).
Proof. The proof follows the same argument as in Lemma 3.5, with the weights given by (3.7).
Proof. Applying the mean value theorem, there exist ξ_j, η_j ∈ [t_j, t_{j+1}] for j = 0, 1, . . . , n such that This completes the proof.
Then, the stability analysis and error estimates of the explicit product integration rectangular rule (3.4), the implicit product integration rectangular rule (3.6), the implicit product integration trapezoidal rule (3.7), and the Adams predictor-corrector method (3.8) are investigated in the next section.

Proof. For each n = 1, 2, . . . , N, let e_{n−1} = y(t_{n−1}) − y_{n−1} and e_0 = 0. By the integral equation (3.2) and the explicit product integration rectangular rule (3.4), we have the following error equation:

Error estimation of the approximation and stability analysis
By assumption (H_3), and applying Lemmas 3.1 and 3.5, we obtain It follows from Lemma 3.3 that the inequality (4.1) holds.
Proof. The proof is essentially similar to that of Theorem 4.1.

Proof. For each n = 0, 1, . . . , N − 1, let e_n = y(t_n) − y_n and e_0 = 0. By the integral equation (3.2) and the Adams predictor-corrector method (3.8), we obtain Then, the terms u_{n,j}^{α,Φ} [f(t_j, y(t_j)) − f(t_j, y_j)] appear. From Lemma 3.7, it follows that By assumption (H_2) on f, we obtain As for u_{n,n}^{α,Φ}, one gets Since f(t, y(t)) = D_{t_0}^{α,Φ} y(t) is continuous and bounded, it follows from Lemma 3.7 that According to Lemmas 3.1 and 3.3, we obtain the desired estimate.

Proof. Suppose that y_0 and y_j (j = 1, 2, . . . , n) have perturbations ỹ_0 and ỹ_j, respectively. From (3.4), it follows that y_n + ỹ_n = y_0 + ỹ_0 + (1/Γ(α+1)) Σ_{j=0}^{n−1} w_{n,j+1}^{α,Φ} f(t_j, y_j + ỹ_j).
The proof is completed.
In the same way, we obtain the stability results for the implicit rectangular and trapezoidal rules in Theorems 4.6 and 4.7, respectively. Hence, we omit the proofs. Theorem 4.6. Assume that (H_1) and (H_2) hold. Let y_j (j = 1, 2, . . . , n) be the solution to the implicit product integration rectangular rule (3.6) on the interval of existence of its unique solution. Then the implicit product integration rectangular rule (3.6) is conditionally stable.
Theorem 4.7. Assume that (H_1) and (H_2) hold. Let y_j (j = 1, 2, . . . , n) be the solution to the implicit product integration trapezoidal rule (3.7) on the interval of existence of its unique solution. Then the implicit product integration trapezoidal rule (3.7) is conditionally stable.
Next, we investigate the stability of the fractional predictor-corrector scheme (3.8).
Theorem 4.8. Assume that (H_1) and (H_2) hold. Let y_j (j = 1, 2, . . . , n) be the solution to the Adams predictor-corrector method (3.8) on the interval of existence of its unique solution. Then the Adams predictor-corrector method (3.8) is conditionally stable.
Proof. Assume that ỹ_0, ỹ_j (j = 1, 2, . . . , n), and ỹ_n^p (n = 1, . . . , N) are the perturbation terms of y_0, y_j, and y_n^p, respectively. Then, we construct the perturbation equations

ỹ_n^p = ỹ_0 + (1/Γ(α+1)) Σ_{j=0}^{n−1} w_{n,j+1}^{α,Φ} [f(t_j, y_j + ỹ_j) − f(t_j, y_j)],

ỹ_n = ỹ_0 + (1/Γ(α)) Σ_{j} u_{n,j}^{α,Φ} [f(t_j, y_j + ỹ_j) − f(t_j, y_j)].

From assumption (H_2) and the inequality (4.4), we can bound the terms u_{n,j}^{α,Φ} [f(t_j, y_j + ỹ_j) − f(t_j, y_j)]. By Lemmas 3.1 and 3.3, the desired bound follows. This completes the proof.

Numerical examples
Motivated by [2], we assume that J = [0, 1] and that Φ ∈ C¹(J) is a differentiable function such that Φ'(t) > 0. Example 5.1. Consider the linear Φ-Caputo fractional differential equation (5.1). It is clearly seen that y(t) = (Φ(t))² is the exact solution.
The exact solution and the numerical solutions of (5.1) are plotted for different kernels Φ in Figure 1. The numerical results are close to the exact solution. Notice that the behaviors of the solutions are similar even though the kernel Φ is changed. In Table 1, we display the maximum errors of the four numerical schemes for (5.1) when Φ(t) = t and α = 0.8. From the data in Table 1, the accuracy of the numerical solutions corresponds to Theorems 4.1-4.4 when h is sufficiently small. Tables 2 and 3 present the maximum errors for the kernels Φ(t) = sin(t/10) and Φ(t) = t², respectively. Overall, the implicit product integration trapezoidal rule and the Adams predictor-corrector method have higher accuracy than the other methods.

Example 5.2. Consider the nonlinear Φ-Caputo fractional differential equation (5.2). It can be seen that y(t) = (Φ(t) − Φ(t_0))^{2α} − (Φ(t) − Φ(t_0))² is the exact solution.

In Figure 2, the numerical and exact solutions of (5.2) are plotted for different kernels Φ. The numerical solutions are close to the exact solution. However, the behavior of the solutions changes when the kernel Φ is changed. When α = 0.8 and Φ(t) = t, the maximum errors of the four numerical schemes for (5.2) are presented in Table 4. The accuracy of the numerical solutions again corresponds to Theorems 4.1-4.4 as the step size h decreases. Next, the maximum errors for (5.2) with the kernels Φ(t) = sin(t/10) and Φ(t) = t² for different values of h are shown in Tables 5 and 6, respectively. The implicit product integration trapezoidal rule has higher accuracy than the other methods, although the implicit product integration rectangular rule gives higher accuracy when Φ(t) = sin(t/10). In particular, the four numerical schemes reduce to the numerical schemes in [22] when Φ(t) = t. Furthermore, if Φ(t) = ln(t), the numerical schemes agree with the example in [26].
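The qualitative convergence behavior reported in the tables can be checked in spirit with a short script. The test problem below is our own manufactured one, not the paper's Examples 5.1-5.2 (whose right-hand sides are not reproduced here): we choose f so that y(t) = (Φ(t) − Φ(t_0))² is the exact solution, run the explicit rectangular rule for two step sizes, and observe the error decrease for each kernel Φ.

```python
import math

def explicit_pi_rect(f, phi, alpha, t0, T, y0, N):
    """Explicit product integration rectangular rule (Section 3) for
    D^{alpha,Phi} y = f(t, y), y(t0) = y0, on a uniform grid."""
    h = (T - t0) / N
    t = [t0 + i * h for i in range(N + 1)]
    z = [phi(ti) for ti in t]
    y = [y0] * (N + 1)
    c = 1.0 / math.gamma(alpha + 1.0)
    for n in range(1, N + 1):
        y[n] = y0 + c * sum(((z[n] - z[j]) ** alpha - (z[n] - z[j + 1]) ** alpha)
                            * f(t[j], y[j]) for j in range(n))
    return t, y

def max_error(phi, alpha, N, t0=0.0, T=1.0):
    """Maximum error on a manufactured problem whose exact solution is
    y(t) = (Phi(t) - Phi(t0))^2, using the standard identity
    D^{alpha,Phi} (Phi(t)-Phi(t0))^2
        = Gamma(3)/Gamma(3-alpha) * (Phi(t)-Phi(t0))^(2-alpha)."""
    g = lambda tt, yy: (math.gamma(3.0) / math.gamma(3.0 - alpha)
                        * (phi(tt) - phi(t0)) ** (2.0 - alpha))
    t, y = explicit_pi_rect(g, phi, alpha, t0, T, 0.0, N)
    return max(abs(yi - (phi(ti) - phi(t0)) ** 2) for ti, yi in zip(t, y))

# for each kernel, halving the step size roughly halves the maximum error
errs = {name: (max_error(phi, 0.8, 80), max_error(phi, 0.8, 160))
        for name, phi in [("t", lambda s: s),
                          ("sin(t/10)", lambda s: math.sin(s / 10.0)),
                          ("t^2", lambda s: s * s)]}
```

The observed error ratio near 2 between the two grids matches the first-order behavior expected of the rectangular rules; the trapezoidal and predictor-corrector variants converge faster, consistent with the tables.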

Conclusions and future work
Four numerical schemes, namely the explicit product integration rectangular rule, the implicit product integration rectangular rule, the implicit product integration trapezoidal rule, and the Adams predictor-corrector method, are extended to solve nonlinear Φ-Caputo fractional differential problems. We also analyze the error estimates and stability of these numerical schemes. When the exact solutions are known, as in Examples 5.1 and 5.2, the implicit product integration trapezoidal rule and the Adams predictor-corrector method perform comparatively better than the explicit and implicit product integration rectangular rules, where the convergence order depends on the step size h.
Moreover, these schemes are investigated for both linear and nonlinear Φ-Caputo fractional differential equations. Future studies on numerical methods based on the Φ-Caputo derivative could address various other types of fractional differential equations, such as fractional differential equations with delay and a nonlocal term, integro-differential equations, and higher-order fractional differential equations.