Article

Exponential Multistep Methods for Stiff Delay Differential Equations

School of Mathematics and Statistics, Guangdong University of Technology, Guangzhou 510006, China
*
Author to whom correspondence should be addressed.
Axioms 2022, 11(5), 185; https://doi.org/10.3390/axioms11050185
Submission received: 25 March 2022 / Revised: 15 April 2022 / Accepted: 18 April 2022 / Published: 19 April 2022

Abstract
Stiff delay differential equations are frequently used in practice, but their numerical simulation is difficult due to the complicated interaction between the stiff and delay terms. At present, only a few low-order methods offer acceptable convergence and stability properties. Exponential integrators are a class of efficient numerical methods for stiff problems that eliminate the influence of stiffness on the scheme by treating the stiff term exactly. This study is concerned with two exponential multistep methods of Adams type for stiff delay differential equations. For semilinear delay differential equations, applying the linear multistep method directly to the integral form of the equation yields the exponential multistep method. It is shown that the proposed k-step method is stiffly convergent of order k. On the other hand, we can follow the strategy of the Rosenbrock method and linearize the equation along the numerical solution in each step. The so-called exponential Rosenbrock multistep method is constructed by applying the exponential multistep method to the transformed form of the semilinear delay differential equation. This method can be easily extended to nonlinear delay differential equations. The main contribution of this study is to show that the k-step exponential Rosenbrock multistep method is stiffly convergent of order k + 1 within the framework of a strongly continuous semigroup on a Banach space. As a result, the methods developed here can be applied to abstract stiff delay differential equations and can serve as time-marching methods for delay partial differential equations. Numerical experiments are presented to demonstrate the theoretical results.

1. Introduction

Delay differential equations (DDEs) have received substantial attention in recent decades. They are used to model many systems in science and engineering. In these systems, the change of the unknown quantity depends not only on the present state but also on the past state. A typical DDE takes the form of
y'(t) = F(t, y(t), y(t - \tau)),
where y(t - \tau) stands for the past effects, and \tau is the time lag. Such equations arise in various fields such as biology [1], control theory [2], population dynamics [3], and others [4,5,6,7].
To solve delay differential equations numerically, the intuitive approach is to use the mature method for ordinary differential equations (ODEs). Many numerical methods originally designed for ODEs have been applied to DDEs. The reader is referred to the monograph by Bellen and Zennaro [8] for a comprehensive review.
In recent years, exponential integrators have attracted considerable attention in the field of numerical solutions of ODEs. Exponential integrators eliminate the influence of stiffness on the scheme by handling the stiff term exactly. The first two exponential integrators, of orders two and three, based on the Adams–Moulton method were presented in [9]. Along with improving computational efficiency, considerable effort has been directed at developing and analyzing exponential integrators for semilinear stiff ODEs. Various exponential integrators have been investigated, such as exponential Runge–Kutta methods [10,11], exponential multistep methods [12,13], exponential Rosenbrock methods [14,15,16], exponential Taylor methods [17], and exponential general linear methods [18]. A good exposition of exponential integrators can be found in [19].
However, a large number of numerical findings demonstrate that, when a method designed for ODEs is directly applied to DDEs, the method's accuracy and stability properties cannot be maintained. Most current theoretical results on the convergence and stability of classic methods have the following shortcomings. To preserve the stiff convergence order of the original numerical method for ODEs, the addition of delay terms requires the method to fulfill additional stiff order conditions. Moreover, the method's stability conditions become more complicated: more and stronger conditions are required to ensure that the scheme meets given stability requirements. The integration of DDEs therefore calls for specifically designed methods rather than standard ODE methods, and the stability and convergence properties of such methods should be thoroughly investigated.
The analysis of exponential integrators for DDEs has been addressed in the literature. In [20], the D-convergence and conditional GDN-stability of implicit exponential Runge–Kutta methods were studied for semilinear DDEs; the sufficient conditions given there imply that the method must be implicit. In [21,22], the stiff convergence and conditional DN-stability of explicit exponential Runge–Kutta methods were investigated for stiff DDEs. In view of the excellent properties of exponential integrators, the motivation of this work is to develop efficient exponential integrators for DDEs with fewer restrictions.
This study is concerned with two exponential multistep methods of Adams type for stiff delay differential equations. For semilinear delay differential equations, applying the linear multistep method directly to the integral form of the equation yields the exponential multistep method. It is shown that the proposed k-step method is stiffly convergent of order k. On the other hand, we can follow the strategy of the Rosenbrock method and linearize the equation along the numerical solution in each step. The so-called exponential Rosenbrock multistep method is constructed by applying the exponential multistep method to the transformed form of the semilinear delay differential equation. This method can be easily extended to nonlinear delay differential equations. The main contribution of this work is to show that the k-step exponential Rosenbrock multistep method is stiffly convergent of order k + 1, owing to the favorable features of the transformed form. The novelty of this study is that the presented methods impose fewer restriction conditions than classical linear multistep methods. Moreover, the convergence analysis is carried out within the framework of a strongly continuous semigroup on a Banach space. As a result, the methods developed in this study can be applied to abstract stiff delay differential equations and can serve as time-marching methods for delay partial differential equations.
The paper is organized as follows: in Section 2, exponential multistep methods are constructed for semilinear DDEs, and it is shown that the proposed k-step method is stiffly convergent of order k. Section 3 is devoted to the exponential Rosenbrock multistep methods; based on an interpolation with a Hermite node at the beginning of each step, the k-step exponential Rosenbrock multistep method is shown to be stiffly convergent of order k + 1. Finally, numerical experiments are presented to demonstrate the theoretical findings.

2. Exponential Multistep Methods

In this section, we consider the following semilinear delay differential equation:
y'(t) = A y(t) + g(t, y(t), y(t - \tau)), \quad 0 \le t \le T,
y(t) = \phi(t), \quad -\tau \le t \le 0.
This form may arise from the space discretization of some semilinear delay parabolic equations [15]. In this case, y(t) \in \mathbb{R}^d is the unknown function, A is a given matrix in \mathbb{R}^{d \times d}, and g is a nonlinear continuous function. The methods presented in this paper are also applicable to abstract delay differential equations in a Banach space X with norm \| \cdot \|. In that case, y(t) \in X and A is a (possibly unbounded) linear operator from X to X.

2.1. Construction of Exponential Multistep Methods

We divide the interval [ 0 , T ] by time step size h and denote t n = n h ; then, for a given integer s, applying the variation-of-constants formula to Equation (1) yields
y(t_{n+s}) = e^{s h A} y(t_n) + \int_0^{s h} e^{(s h - \theta) A} g(t_n + \theta, y(t_n + \theta), y(t_n + \theta - \tau)) \, d\theta.
This is the starting point of the exponential multistep methods. Let y_n denote the approximation of the exact solution y(t) at time t = t_n. Similarly, y_{n,\tau} denotes the numerical approximation of y(t_n - \tau). Exponential multistep methods are constructed by treating the integral in (2) numerically.
We approximate the nonlinear term g in Equation (2) by the Lagrange interpolation polynomial p n through the points
(t_{n-k+1}, G_{n-k+1}), (t_{n-k+2}, G_{n-k+2}), \ldots, (t_n, G_n).
Here, we set G_i = g(t_i, y_i, y_{i,\tau}) for simplicity. According to the Newton backward interpolation formula, the polynomial p_n can be expressed as follows:
p_n(t_n + \theta h) = \sum_{j=0}^{k-1} (-1)^j \binom{-\theta}{j} \nabla^j G_n.
Here, \nabla^j G_n denotes the jth backward difference, defined recursively by
\nabla^0 G_n = G_n, \quad \nabla^j G_n = \nabla^{j-1} G_n - \nabla^{j-1} G_{n-1}, \quad j = 1, 2, \ldots
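As a concrete illustration of the recursion above, the backward-difference table can be built column by column. The sketch below uses scalar values; in the method itself the entries G_i = g(t_i, y_i, y_{i,\tau}) are vectors, but the recursion is identical componentwise.

```python
def backward_differences(G):
    """Given G = [G_{n-k+1}, ..., G_n], return the list
    [grad^0 G_n, grad^1 G_n, ..., grad^{k-1} G_n] of backward differences at G_n."""
    diffs = []
    col = list(G)
    while col:
        diffs.append(col[-1])  # the j-th backward difference sits at the end of column j
        col = [col[i + 1] - col[i] for i in range(len(col) - 1)]
    return diffs
```

Equivalently, the differences admit the explicit form \nabla^j G_n = \sum_{i=0}^{j} (-1)^i \binom{j}{i} G_{n-i}; note that the (k)th backward difference of data sampled from a degree-(k-1) polynomial vanishes.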
Note that the delay terms y_{n,\tau} are involved in the interpolation data G_n. For t_n - \tau \le 0, we can simply set y_{n,\tau} by the initial function, i.e., y_{n,\tau} = \phi(t_n - \tau). When t_n - \tau > 0, we approximate it by the Lagrange interpolation polynomial q_{n,\tau} through the points
(t_{n-m-v}, y_{n-m-v}), (t_{n-m-v+1}, y_{n-m-v+1}), \ldots, (t_{n-m+r}, y_{n-m+r}).
Here, the integer m satisfies \tau = (m - \delta) h with 0 \le \delta < 1, and r \le m is required to ensure that no unknown values are used. The Lagrange interpolation polynomial q_{n,\tau} can be expressed as follows:
q_{n,\tau}(t_n - \tau) = \sum_{\ell=-v}^{r} L_\ell(\delta) \, y_{n-m+\ell}, \quad L_\ell(\delta) = \prod_{\substack{j=-v \\ j \ne \ell}}^{r} \frac{\delta - j}{\ell - j}.
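For concreteness, the weights L_\ell(\delta) can be computed directly from the product formula. The following is a small sketch (the function name is ours); it exploits the fact that t_n - \tau = t_{n-m} + \delta h, so the interpolation nodes sit at integer offsets \ell while the evaluation point sits at offset \delta.

```python
def lagrange_weights(v, r, delta):
    """Weights L_ell(delta), ell = -v, ..., r, for evaluating the Lagrange
    interpolant of the values y_{n-m+ell} at the off-grid point
    t_n - tau = t_{n-m} + delta * h."""
    weights = {}
    for ell in range(-v, r + 1):
        w = 1.0
        for j in range(-v, r + 1):
            if j != ell:
                w *= (delta - j) / (ell - j)  # product formula for L_ell
        weights[ell] = w
    return weights
```

As a sanity check, the weights sum to one and reproduce every polynomial of degree at most r + v exactly at the point \delta.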
We now have all of the pieces necessary to construct the exponential multistep methods. This paper focuses on multistep methods of Adams type; hence, we set s = 1 in (2). The numerical method is obtained by replacing the nonlinearity g in (2) by the interpolation polynomial p n . Therefore, we can write
y_{n+1} = e^{hA} y_n + h \int_0^1 e^{(1-\theta) hA} p_n(t_n + \theta h) \, d\theta;
then, we have the k-step exponential multistep method for the semilinear DDE (1):
y_{n+1} = e^{hA} y_n + h \sum_{j=0}^{k-1} \beta_j(hA) \, \nabla^j G_n.
The weights β j are defined as
\beta_j(hA) = (-1)^j \int_0^1 e^{(1-\theta) hA} \binom{-\theta}{j} \, d\theta, \quad j \ge 0.
Remark 1.
We remark that the proposed method can be viewed as a small perturbation of the exponential Euler method, which is given by
y_{n+1} = e^{hA} y_n + h \beta_0(hA) G_n.
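To make the exponential Euler step concrete, here is a scalar sketch for the undelayed problem y' = A y + g(t, y); the function names and the test problem are ours. For constant g the step is exact, since h\varphi_1(hA) c = (e^{hA} - 1) c / A reproduces the variation-of-constants integral; this is why stiffness does not restrict the step size of exponential integrators.

```python
import math

def phi1(z):
    """phi_1(z) = (e^z - 1)/z, with the removable singularity at z = 0 handled."""
    return math.expm1(z) / z if z != 0.0 else 1.0

def exponential_euler(A, g, y0, h, n_steps, t0=0.0):
    """Scalar exponential Euler: y_{n+1} = e^{hA} y_n + h * phi_1(hA) * g(t_n, y_n).
    A sketch for y' = A y + g(t, y); the delay case would additionally feed
    the history approximation y_{n,tau} into g."""
    y, t = y0, t0
    ehA, hphi = math.exp(h * A), h * phi1(h * A)
    for _ in range(n_steps):
        y = ehA * y + hphi * g(t, y)
        t += h
    return y
```

For instance, for the stiff linear problem y' = -50 y + 3 the scheme reproduces the exact solution for any step size.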
The weights β j given in (5) can be expressed as the linear combination of φ -functions, which are defined as follows:
Definition 1.
For any complex number z \in \mathbb{C}, let \varphi_0(z) = e^z. For all integers j \ge 1, define \varphi_j(z) as
\varphi_j(z) = \int_0^1 e^{(1-\theta) z} \frac{\theta^{j-1}}{(j-1)!} \, d\theta.
Note that φ -functions enjoy the following recurrence relation:
\varphi_{j+1}(z) = \frac{\varphi_j(z) - \frac{1}{j!}}{z} = \frac{\varphi_j(z) - \varphi_j(0)}{z}, \quad j \ge 0.
By introducing the φ -functions, the weights of the k-step exponential multistep methods up to k = 7 are given by
\beta_0(z) = \varphi_1(z),
\beta_1(z) = \varphi_2(z),
\beta_2(z) = \varphi_3(z) + \tfrac{1}{2} \varphi_2(z),
\beta_3(z) = \varphi_4(z) + \varphi_3(z) + \tfrac{1}{3} \varphi_2(z),
\beta_4(z) = \varphi_5(z) + \tfrac{3}{2} \varphi_4(z) + \tfrac{11}{12} \varphi_3(z) + \tfrac{1}{4} \varphi_2(z),
\beta_5(z) = \varphi_6(z) + 2 \varphi_5(z) + \tfrac{7}{4} \varphi_4(z) + \tfrac{5}{6} \varphi_3(z) + \tfrac{1}{5} \varphi_2(z),
\beta_6(z) = \varphi_7(z) + \tfrac{5}{2} \varphi_6(z) + \tfrac{17}{6} \varphi_5(z) + \tfrac{15}{8} \varphi_4(z) + \tfrac{137}{180} \varphi_3(z) + \tfrac{1}{6} \varphi_2(z).
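As a scalar illustration (not the matrix implementation one would use in practice), \varphi_j(z) can be evaluated via the recurrence above, switching to the Taylor series \varphi_j(z) = \sum_{i \ge 0} z^i / (i+j)! near z = 0, where the recurrence suffers from cancellation. The cutoff 0.5 below is an assumption chosen for illustration.

```python
import math

def phi(j, z, terms=30):
    """Scalar phi_j(z). Uses the recurrence phi_{j+1}(z) = (phi_j(z) - 1/j!) / z
    for moderate |z|, and the Taylor series sum_i z^i / (i+j)! for small |z|,
    where the recurrence cancels badly."""
    if abs(z) < 0.5:
        return sum(z ** i / math.factorial(i + j) for i in range(terms))
    val = math.exp(z)  # phi_0(z) = e^z
    for i in range(j):
        val = (val - 1.0 / math.factorial(i)) / z
    return val
```

With this helper, the weight \beta_2(z), for example, is simply `phi(3, z) + 0.5 * phi(2, z)`, and similarly for the other entries of the table.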
Remark 2.
It is obvious that efficiently computing the φ-functions is critical for the implementation of exponential multistep methods. Several efforts have been made along this line; see [23,24,25,26]. For large-scale problems, it is more efficient to compute the product of a matrix exponential function with a vector directly, without forming the matrix exponential explicitly.
Remark 3.
For multistep methods, the starting values y_1, y_2, \ldots, y_{k-1} are unknown. Normally, they are obtained by one-step methods. Here, we adopt the method proposed in [12]; the details are given in Appendix A.

2.2. Stiff Convergence of Exponential Multistep Methods

In this subsection, we investigate the stiff convergence of the k-step exponential multistep method (4). Stiff convergence means that the error bound is of the form C h^p, where the constant C is independent of the stiffness and of the step size h. The analysis is carried out within the framework of strongly continuous semigroups. The following assumptions on the semilinear DDE (1) are made.
Assumption 1.
Let X be a Banach space with norm \| \cdot \|. The solution y(t) of Equation (1) is sufficiently smooth. The linear operator A : X \to X is the infinitesimal generator of a strongly continuous semigroup e^{tA} on X. The nonlinearity g : [0, T] \times X \times X \to X is Fréchet differentiable in a strip along the exact solution y(t). All occurring derivatives are assumed to be uniformly bounded.
Under this assumption, there exists a Lipschitz constant L such that
\| g(t, \mu_1, \nu_1) - g(t, \mu_2, \nu_2) \| \le L \left( \| \mu_1 - \mu_2 \| + \| \nu_1 - \nu_2 \| \right)
holds in a neighborhood of the exact solution. Moreover, the following parabolic smoothing property holds:
\| e^{tA} \| + \| t A e^{tA} \| \le C.
For the \varphi-functions defined by (6), we have
\| \varphi_j(hA) \| \le \int_0^1 \| e^{(1-\theta) hA} \| \frac{\theta^{j-1}}{(j-1)!} \, d\theta \le \frac{C}{j!}.
It follows that the weights β j ( h A ) are bounded operators.
The main result is the following.
Theorem 1.
Apply the k-step exponential multistep method (4) to the semilinear DDE (1) satisfying Assumption 1. If the Lagrange interpolation (3) satisfies r + v = k - 1 and the starting values satisfy
\| y_j - y(t_j) \| \le c_0 h^k, \quad j = 1, \ldots, k - 1,
then the error bound
\| y_n - y(t_n) \| \le C h^k \sup_{0 \le t \le t_n} \left\| \frac{d^k}{dt^k} g(t, y(t), y(t - \tau)) \right\|
holds uniformly for n such that 0 \le n h \le T. The constant C depends on T, but not on \| A \|, n, and h.
Proof. 
Let e n = y n y ( t n ) denote the error of the method at time t n . It is easy to see that e 0 = 0 . The first step is to build the error equation for e n .
Note that the exact solution y(t_n) of (1) has a form similar to that of the numerical solution y_n. Indeed, according to the variation-of-constants formula, the solution of (1) satisfies
y(t_{n+1}) = e^{hA} y(t_n) + h \int_0^1 e^{h(1-\theta) A} f(t_n + \theta h) \, d\theta
with f(t) = g(t, y(t), y(t - \tau)) for simplicity. Denote by \tilde p_n the interpolation polynomial through the exact data
(t_{n-k+1}, f(t_{n-k+1})), (t_{n-k+2}, f(t_{n-k+2})), \ldots, (t_n, f(t_n)).
It easily follows that
\tilde p_n(t_n + \theta h) = \sum_{j=0}^{k-1} (-1)^j \binom{-\theta}{j} \nabla^j f(t_n),
and the interpolation error is
f(t_n + \theta h) - \tilde p_n(t_n + \theta h) = h^k (-1)^k \binom{-\theta}{k} f^{(k)}(\zeta(\theta)).
Here, ζ ( θ ) is a certain intermediate time in the interval [ t n k + 1 , t n + 1 ] . Inserting (11) into (10), we obtain
y(t_{n+1}) = e^{hA} y(t_n) + h \int_0^1 e^{h(1-\theta) A} \tilde p_n(t_n + \theta h) \, d\theta + \delta_{n+1}.
Here, the defect δ n + 1 is
\delta_{n+1} = h \int_0^1 e^{h(1-\theta) A} \left( f(t_n + \theta h) - \tilde p_n(t_n + \theta h) \right) d\theta.
Subtracting (4) from (12) gives the error recursion
e_{n+1} = e^{hA} e_n + h \int_0^1 e^{h(1-\theta) A} \left( p_n(t_n + \theta h) - \tilde p_n(t_n + \theta h) \right) d\theta - \delta_{n+1}.
Solving this recursion with e 0 = 0 , we find that
e_n = h \sum_{j=0}^{n-1} e^{(n-j-1) hA} \left[ \int_0^1 e^{h(1-\theta) A} \left( p_j(t_j + \theta h) - \tilde p_j(t_j + \theta h) \right) d\theta - \frac{1}{h} \delta_{j+1} \right].
We will now show that the error e n indeed satisfies the bound (9).
First, for the defect δ n + 1 , we infer from (8) and (11) that
\| \delta_{n+1} \| \le C h^{k+1} M, \quad M = \sup_{0 \le t \le t_{n+1}} \| f^{(k)}(t) \|.
Next, we have to estimate the term p_j(t_j + \theta h) - \tilde p_j(t_j + \theta h). Note that the polynomials p_j and \tilde p_j contain the information of the delay terms. To this end, let \tilde q_{n,\tau} denote the interpolation polynomial through the exact history data
(t_{n-m-v}, y(t_{n-m-v})), (t_{n-m-v+1}, y(t_{n-m-v+1})), \ldots, (t_{n-m+r}, y(t_{n-m+r})).
Then, there exists a polynomial q(\delta) of degree r + v + 1 such that
\varepsilon_{n,\tau} := y(t_n - \tau) - \tilde q_{n,\tau}(t_n - \tau) = h^{r+v+1} q(\delta) \, y^{(r+v+1)}(\zeta),
where \zeta \in [t_{n-m-v}, t_{n-m+r}]. Similarly to the case of \delta_{n+1}, we see that
\| \varepsilon_{n,\tau} \| \le C \tilde M h^{r+v+1}, \quad \tilde M = \sup_{0 \le t \le t_{n+1}} \| y^{(r+v+1)}(t) \|.
Now we are in a position to estimate the difference p_j(t_j + \theta h) - \tilde p_j(t_j + \theta h):
\| p_j(t_j + \theta h) - \tilde p_j(t_j + \theta h) \|
\le C \sum_{\ell=j-k+1}^{j} \| g(t_\ell, y_\ell, y_{\ell,\tau}) - g(t_\ell, y(t_\ell), y(t_\ell - \tau)) \|
\le C L \sum_{\ell=j-k+1}^{j} \left( \| y_\ell - y(t_\ell) \| + \| q_{\ell,\tau}(t_\ell - \tau) - \tilde q_{\ell,\tau}(t_\ell - \tau) \| + \| \varepsilon_{\ell,\tau} \| \right)
\le C L \sum_{\ell=j-k+1}^{j} \left( \| e_\ell \| + \sum_{i=-v}^{r} \| e_{\ell-m+i} \| + \tilde M h^{r+v+1} \right).
Combining the above estimates with (13) yields
\| e_n \| \le C \max_{j=1,\ldots,k-1} \| e_j \| + C h \sum_{j=0}^{n-1} \left( \| e_j \| + \frac{1}{h} \| \delta_{j+1} \| + \tilde M h^{r+v+1} \right).
Here, the bound (8) is used. By the discrete Gronwall lemma, we have
\| e_n \| \le C \max_{j=1,\ldots,k-1} \| e_j \| + C \sum_{j=0}^{n-1} \| \delta_j \| + C \tilde M h^{r+v+1} \le C \max_{j=1,\ldots,k-1} \| e_j \| + C \max\{M, \tilde M\} \, h^{\min\{k, \, r+v+1\}}.
The assertion is obtained by setting r + v = k - 1. □

3. Exponential Rosenbrock Multistep Methods

In the last section, we designed the exponential multistep methods for semilinear DDEs and investigated their stiff convergence. However, in some instances, numerical results indicate that this treatment may cause certain issues. In particular, when the initial state is far from the equilibrium point, the scheme must take a very small time step to remain stable, which reduces the method's computational efficiency.
In this section, we follow the strategy of the Rosenbrock method [14] to linearize the equation along the numerical solution in each step. The so-called exponential Rosenbrock multistep method is constructed by applying the exponential multistep method to the transformed form of the delay differential equation. This method can be easily extended to nonlinear delay differential equations. Furthermore, we can show that the k-step exponential Rosenbrock multistep method is stiffly convergent of order k + 1 because of the favorable features of the transformed form.

3.1. Construction of Exponential Rosenbrock Multistep Methods

To advance the semilinear DDE (1) from t_n to t_{n+1}, we perform the following transformation based on the state (t_n, y_n, y_{n,\tau}). Let d, J, and J_\tau be the derivatives of the right-hand side of Equation (1) with respect to the first argument t, the second argument y, and the third argument y_\tau, respectively, i.e.,
d(t, \mu, \nu) = \frac{\partial g}{\partial t}(t, \mu, \nu), \quad J(t, \mu, \nu) = A + \frac{\partial g}{\partial y}(t, \mu, \nu), \quad J_\tau(t, \mu, \nu) = \frac{\partial g}{\partial y_\tau}(t, \mu, \nu).
We evaluate these at the state (t_n, y_n, y_{n,\tau}) and denote
d_n = d(t_n, y_n, y_{n,\tau}), \quad J_n = J(t_n, y_n, y_{n,\tau}), \quad J_{n,\tau} = J_\tau(t_n, y_n, y_{n,\tau}).
Then, we rewrite the semilinear DDE (1) as
y'(t) = J_n y(t) + d_n t + J_{n,\tau} y(t - \tau) + r_n(t, y(t), y(t - \tau)), \quad 0 \le t \le T,
y(t) = \phi(t), \quad -\tau \le t \le 0.
Here, the remainder r_n is given by
r_n(t, y(t), y(t - \tau)) = A y(t) + g(t, y(t), y(t - \tau)) - J_n y(t) - d_n t - J_{n,\tau} y(t - \tau).
It satisfies
\frac{\partial r_n}{\partial t}(t_n, y_n, y_{n,\tau}) = 0, \quad \frac{\partial r_n}{\partial y}(t_n, y_n, y_{n,\tau}) = 0, \quad \frac{\partial r_n}{\partial y_\tau}(t_n, y_n, y_{n,\tau}) = 0.
As will be indicated in Section 3.2, these properties are critical in deriving the convergence order of the exponential Rosenbrock multistep method.
Remark 4.
It appears that the semilinear DDE has been changed into a more complicated form. Nevertheless, the Lipschitz constant of the remainder r n in (14) is smaller than that of the nonlinear term g in (1). The numerical results presented in Section 4 indicate that the exponential multistep method based on (14) is more accurate than that based on the original form (1). Indeed, we will show that the k-step exponential Rosenbrock multistep method is stiffly convergent of order k + 1 . The order of this method is higher than that of the exponential multistep method.
We now construct the exponential Rosenbrock multistep method for semilinear DDE (1) based on the form (14).
The exact solution of (14) can be expressed by the following variation-of-constants formula:
y(t_{n+1}) = e^{h J_n} y(t_n) + h \int_0^1 e^{(1-\theta) h J_n} \big[ (t_n + \theta h) d_n + J_{n,\tau} y(t_n + \theta h - \tau) + r_n(t_n + \theta h, y(t_n + \theta h), y(t_n + \theta h - \tau)) \big] \, d\theta.
With the φ -functions defined by (6), we arrive at
y(t_{n+1}) = e^{h J_n} y(t_n) + h \varphi_1(h J_n) t_n d_n + h^2 \varphi_2(h J_n) d_n + h \int_0^1 e^{(1-\theta) h J_n} J_{n,\tau} \, y(t_n + \theta h - \tau) \, d\theta + h \int_0^1 e^{(1-\theta) h J_n} r_n(t_n + \theta h, y(t_n + \theta h), y(t_n + \theta h - \tau)) \, d\theta.
The exponential Rosenbrock multistep method is obtained by replacing y(t_n + \theta h - \tau) and r_n(t_n + \theta h, y(t_n + \theta h), y(t_n + \theta h - \tau)) with suitable approximations.
For y(t_n + \theta h - \tau), we use the interpolation polynomial \hat\eta_n through the points
(t_n - \tau, y_{n,\tau}), (t_{n-1} - \tau, y_{n-1,\tau}), \ldots, (t_{n-k} - \tau, y_{n-k,\tau}),
and evaluate it at t_n + \theta h - \tau. That is,
y(t_n + \theta h - \tau) \approx \hat\eta_n(t_n + \theta h - \tau) = \sum_{j=0}^{k} (-1)^j \binom{-\theta}{j} \nabla^j y_{n,\tau}.
For the remainder term r_n, we approximate it by the interpolation polynomial \hat p_n through the points
(t_{n-k+1}, r_n(t_{n-k+1}, y_{n-k+1}, y_{n-k+1,\tau})), \ldots, (t_n, r_n(t_n, y_n, y_{n,\tau})).
Noting the property (15), for the given data we have r_n'(t_n) = 0. Hence, we further require that \hat p_n'(t_n) = 0. In other words, we employ an incomplete Hermite interpolation to approximate r_n. This is the source of the order k + 1 convergence of the k-step exponential Rosenbrock multistep method. Denoting \hat R_i^n = r_n(t_i, y_i, y_{i,\tau}) for simplicity, the Hermite interpolation polynomial \hat p_n reads
\hat p_n(t_n + \theta h) = \hat R_n^n + \sum_{j=1}^{k-1} \left[ (-1)^j \binom{-\theta}{j} - \frac{k}{j} (-1)^k \binom{-\theta}{k} \right] \nabla^j \hat R_n^n.
Replacing the delay term y(t_n + \theta h - \tau) and the remainder term r_n in (16) by the polynomials \hat\eta_n and \hat p_n, respectively, gives
y_{n+1} = e^{h J_n} y_n + h \varphi_1(h J_n) t_n d_n + h^2 \varphi_2(h J_n) d_n + h \sum_{j=0}^{k} \beta_j(h J_n) J_{n,\tau} \nabla^j y_{n,\tau} + h \varphi_1(h J_n) \hat R_n^n + h \sum_{j=1}^{k-1} \left[ \beta_j(h J_n) - \frac{k}{j} \beta_k(h J_n) \right] \nabla^j \hat R_n^n.
Here, φ j ( z ) and β j ( z ) are defined by (6) and (7).
This is the exponential Rosenbrock multistep method for the semilinear DDE (1). The procedure for determining the starting values y_1, y_2, \ldots, y_{k-1} is sketched in Appendix B.
Remark 5.
The exponential multistep method can be easily extended to a nonlinear delay differential equation
y'(t) = F(t, y(t), y(t - \tau)), \quad 0 \le t \le T,
y(t) = \phi(t), \quad -\tau \le t \le 0.
The exponential Rosenbrock multistep method is based on the transformed form
y'(t) = F_y \, y(t) + F_t \, t + F_{y_\tau} \, y(t - \tau) + r_n(t, y(t), y(t - \tau)), \quad 0 \le t \le T,
y(t) = \phi(t), \quad -\tau \le t \le 0,
with the corresponding remainder r_n. All of the derivatives are evaluated at the state (t_n, y_n, y_{n,\tau}).

3.2. Stiff Convergence of Exponential Rosenbrock Multistep Methods

This subsection is devoted to investigating the stiff convergence of the exponential Rosenbrock multistep methods for semilinear DDEs (1).
Under Assumption 1, these Jacobians satisfy the Lipschitz conditions:
\| d(t, \mu_1, \nu_1) - d(t, \mu_2, \nu_2) \| \le L (\| \mu_1 - \mu_2 \| + \| \nu_1 - \nu_2 \|),
\| J(t, \mu_1, \nu_1) - J(t, \mu_2, \nu_2) \| \le L (\| \mu_1 - \mu_2 \| + \| \nu_1 - \nu_2 \|),
\| J_\tau(t, \mu_1, \nu_1) - J_\tau(t, \mu_2, \nu_2) \| \le L (\| \mu_1 - \mu_2 \| + \| \nu_1 - \nu_2 \|).
Note that the operator J_n can be viewed as a perturbation of A by the bounded operator \partial g / \partial y. We infer from Assumption 1 and the perturbation result in [27] that J_n generates a strongly continuous semigroup e^{t J_n}. Hence, there exist constants C, \omega such that
\| e^{t J_n} \| \le C e^{\omega t}, \quad \| t J_n e^{t J_n} \| \le C
hold uniformly in a neighborhood of the exact solution. It further implies that φ j ( h J n ) and β j ( h J n ) are bounded operators. Indeed, we have
\| \varphi_j(h J_n) \| \le \int_0^1 \| e^{(1-\theta) h J_n} \| \frac{\theta^{j-1}}{(j-1)!} \, d\theta \le \frac{C \max\{1, e^{\omega h}\}}{j!}.
We are now in a position to state our main result on the convergence of the exponential Rosenbrock multistep methods. In contrast to the case of exponential multistep methods, the Jacobians vary with each step. Hence, the proof is more involved.
Theorem 2.
Apply the k-step exponential Rosenbrock multistep method (17) to the semilinear DDE (1) satisfying Assumption 1. If the Lagrange interpolation (3) satisfies r + v = k and the starting values satisfy
\| y_j - y(t_j) \| \le c_1 h^{k+1}, \quad j = 1, \ldots, k - 1,
then the error bound
\| y_n - y(t_n) \| \le C h^{k+1} \sup_{0 \le t \le t_n} \left( \| f^{(k+1)}(t) \| + \| y^{(k+1)}(t) \| \right)
holds uniformly for n such that 0 \le n h \le T. The constant C depends on T, but not on \| A \|, n, and h.
Proof. 
Let e n = y n y ( t n ) denote the error of the method at time t n .
The semilinear DDE (1) is recast into the form (14) at t = t_n. Then, the variation-of-constants formula allows us to write the solution of (14) as
y(t_{n+1}) = e^{h J_n} y(t_n) + h \varphi_1(h J_n) t_n d_n + h^2 \varphi_2(h J_n) d_n + h \int_0^1 e^{(1-\theta) h J_n} J_{n,\tau} \, y(t_n + \theta h - \tau) \, d\theta + h \int_0^1 e^{(1-\theta) h J_n} f_n(t_n + \theta h) \, d\theta
with f n ( t ) = r n ( t , y ( t ) , y ( t τ ) ) for simplicity.
For f_n(t), we consider the Hermite interpolation polynomial \bar p_n(t) through the exact data
(t_{n-k+1}, f_n(t_{n-k+1})), (t_{n-k+2}, f_n(t_{n-k+2})), \ldots, (t_n, f_n(t_n)),
which additionally satisfies \bar p_n'(t_n) = f_n'(t_n). Then, the interpolation error is given by
f_n(t_n + \theta h) - \bar p_n(t_n + \theta h) = h^{k+1} (-1)^k \theta \binom{-\theta}{k} \frac{f_n^{(k+1)}(\zeta(\theta))}{k + 1}.
Here, ζ ( θ ) is a certain intermediate time in the interval [ t n k + 1 , t n ] .
On the other hand, for delay term y ( t n + θ h τ ) , we introduce the interpolation polynomial η ¯ n based on the exact data
(t_n - \tau, y(t_n - \tau)), (t_{n-1} - \tau, y(t_{n-1} - \tau)), \ldots, (t_{n-k} - \tau, y(t_{n-k} - \tau)).
In this case, we have the interpolation error
y(t_n + \theta h - \tau) - \bar\eta_n(t_n + \theta h - \tau) = h^{k+1} (-1)^{k+1} \binom{-\theta}{k+1} y^{(k+1)}(\xi(\theta))
for a certain intermediate time \xi(\theta) \in [t_{n-k} - \tau, \, t_n - \tau].
Replacing the nonlinear term f_n(t) and the delay term y(t_n + \theta h - \tau) in (19) by \bar p_n and \bar\eta_n, respectively, yields
y(t_{n+1}) = e^{h J_n} y(t_n) + h \varphi_1(h J_n) t_n d_n + h^2 \varphi_2(h J_n) d_n + h \int_0^1 e^{(1-\theta) h J_n} J_{n,\tau} \, \bar\eta_n(t_n + \theta h - \tau) \, d\theta + h \int_0^1 e^{(1-\theta) h J_n} \bar p_n(t_n + \theta h) \, d\theta + \rho_{n+1} + \sigma_{n+1}.
This means that the exact solution y ( t n + 1 ) can be expressed in a similar form as the numerical solution. Here, the defects ρ n + 1 and σ n + 1 are
\rho_{n+1} = h \int_0^1 e^{h(1-\theta) J_n} J_{n,\tau} \left( y(t_n + \theta h - \tau) - \bar\eta_n(t_n + \theta h - \tau) \right) d\theta, \quad \sigma_{n+1} = h \int_0^1 e^{h(1-\theta) J_n} \left( f_n(t_n + \theta h) - \bar p_n(t_n + \theta h) \right) d\theta.
Thus, we have the error recursion by taking the difference between (17) and (20):
e_{n+1} = e^{h J_n} e_n + h \int_0^1 e^{h(1-\theta) J_n} J_{n,\tau} \left( \hat\eta_n(t_n + \theta h - \tau) - \bar\eta_n(t_n + \theta h - \tau) \right) d\theta + h \int_0^1 e^{h(1-\theta) J_n} \left( \hat p_n(t_n + \theta h) - \bar p_n(t_n + \theta h) \right) d\theta - \rho_{n+1} - \sigma_{n+1}.
Solving this recursion and using e 0 = 0 gives
e_n = h \sum_{j=0}^{n-1} \left( \prod_{i=j+1}^{n-1} e^{h J_i} \right) \left[ \int_0^1 e^{h(1-\theta) J_j} J_{j,\tau} \left( \hat\eta_j(t_j + \theta h - \tau) - \bar\eta_j(t_j + \theta h - \tau) \right) d\theta + \int_0^1 e^{h(1-\theta) J_j} \left( \hat p_j(t_j + \theta h) - \bar p_j(t_j + \theta h) \right) d\theta - \frac{1}{h} \rho_{j+1} - \frac{1}{h} \sigma_{j+1} \right].
Under Assumption 1, it follows from (18) that the defects \rho_{n+1} and \sigma_{n+1} are bounded by
\| \rho_{n+1} \| \le C h^{k+2} \bar M, \quad \| \sigma_{n+1} \| \le C h^{k+2} \bar M
with
\bar M = \sup_{0 \le t \le t_{n+1}} \left( \| f_n^{(k+1)}(t) \| + C \| y^{(k+1)}(t) \| \right).
To bound the terms \hat\eta_j(t_j + \theta h - \tau) - \bar\eta_j(t_j + \theta h - \tau) and \hat p_j(t_j + \theta h) - \bar p_j(t_j + \theta h) in the error equation (21), we proceed in the same manner as for the term p_j(t_j + \theta h) - \tilde p_j(t_j + \theta h) in Theorem 1. In fact, we have
\| \hat\eta_j(t_j + \theta h - \tau) - \bar\eta_j(t_j + \theta h - \tau) \| \le C (k+1) \sum_{\ell=-k-v}^{r} \| e_{j-m+\ell} \| + C \bar M (k+1) h^{r+v+1}
and
\| \hat p_j(t_j + \theta h) - \bar p_j(t_j + \theta h) \| \le C L \sum_{\ell=j-k+1}^{j} \left( \| e_\ell \| + \sum_{i=-v}^{r} \| e_{\ell-m+i} \| \right) + C (k+1) L \bar M h^{r+v+1}.
Here, we have taken r + v = k .
Gathering the above bounds with (22), we infer from (18) that
\| e_n \| \le C \max_{j=1,\ldots,k-1} \| e_j \| + C (k+1) h \sum_{j=0}^{n-1} \left( \| e_j \| + C \bar M h^{k+1} \right).
The result now follows from the discrete Gronwall lemma. This completes the proof. □

4. Numerical Experiments

In this section, we present some experiments to demonstrate the theoretical results.
Example 1.
We consider the following semilinear delay reaction–diffusion equation on the interval x \in [0, 1]:
\frac{\partial u}{\partial t}(x, t) = D \frac{\partial^2 u}{\partial x^2}(x, t) - \sigma u(x, t) \left( 1 + a u(x, t) + b u(x, t)^2 + c u(x, t - 0.1) \right) + f_1(x, t), \quad t \in [0, 10],
u(0, t) = u(1, t) = 0, \quad t \in [0, 10],
u(x, t) = x (1 - x) e^{-t}, \quad t \in [-0.1, 0],
where D is the diffusion coefficient and \sigma, a, b, and c are positive constants. The source term f_1(x, t) is chosen so that the exact solution is u(x, t) = x (1 - x) e^{-t}.
The space derivative is approximated by the standard second-order central difference method with a mesh size of 0.01. For the time discretization, the exponential multistep method (4) and the exponential Rosenbrock multistep method (17) are employed. The relative L^2 errors of the proposed methods are displayed in Figure 1 and Figure 2. We conclude that the k-step exponential multistep method achieves kth-order accuracy, while the k-step exponential Rosenbrock multistep method achieves order k + 1. Moreover, for the same step size, the error of the exponential Rosenbrock multistep method is smaller than that of the exponential multistep method.
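For reference, the spatial semidiscretization used here can be sketched as follows; the grid parameters are chosen for illustration only, and the function name is ours.

```python
import math

def laplacian_1d(u, dx):
    """Standard second-order central difference approximation of u'' at the
    interior grid points, with homogeneous Dirichlet boundary values
    u(0) = u(1) = 0 as in Example 1."""
    n = len(u)
    out = []
    for i in range(n):
        left = u[i - 1] if i > 0 else 0.0        # boundary value u(0) = 0
        right = u[i + 1] if i < n - 1 else 0.0   # boundary value u(1) = 0
        out.append((left - 2.0 * u[i] + right) / dx ** 2)
    return out
```

After this semidiscretization, the stiff linear part A is D times this tridiagonal stencil, and the exponential integrators advance the resulting stiff delay ODE system in time. Applied to u(x) = sin(\pi x), the stencil reproduces u'' = -\pi^2 u with the expected O(dx^2) error.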
To demonstrate the superiority of the proposed exponential methods over classical methods, we measure the computing times of different methods: Adams linear multistep methods (see, e.g., [28]), exponential multistep methods, and exponential Rosenbrock multistep methods. The CPU times in seconds required for all methods to achieve a common error tolerance of 10^{-8} are recorded in Table 1. The results show that the exponential methods are more efficient than the linear multistep methods. Furthermore, the advantages of high-order methods over low-order methods are clearly demonstrated.
Example 2.
Next, we consider the following reaction–diffusion system with delay:
\frac{\partial u_1}{\partial t}(x, t) = D_1 \frac{\partial^2 u_1}{\partial x^2}(x, t) + r_1 u_1(x, t) \left( 1 - a_1 u_1(x, t - 1) + b_1 u_2(x, t - 1) \right) + f_1(x, t),
\frac{\partial u_2}{\partial t}(x, t) = D_2 \frac{\partial^2 u_2}{\partial x^2}(x, t) + r_2 u_2(x, t) \left( 1 + b_2 u_1(x, t - 1) - a_2 u_2(x, t - 1) \right) + f_2(x, t).
We solve this problem on [0, 1] \times [-1, 15] with the initial conditions
u_1(x, t) = x (1 - x) \sin^2(t), \quad -1 \le t \le 0, \qquad u_2(x, t) = x (1 - x) \cos^2(t), \quad -1 \le t \le 0.
The numerical solutions are obtained by the methods used in Example 1. The relative L 2 errors are presented in Figure 3 and Figure 4. Clearly, all the methods achieve their theoretical convergence order. It also confirms that the exponential Rosenbrock multistep methods are more accurate than the exponential multistep methods.

5. Conclusions

In this paper, we propose Adams-type exponential multistep methods for semilinear stiff delay differential equations. The exponential multistep methods are obtained by applying the linear multistep method directly to the integral form of the equation, while the exponential Rosenbrock multistep methods are constructed by applying the exponential multistep methods to a transformed form of the equation. We show that the k-step exponential multistep method is stiffly convergent of order k and that the stiff convergence order of the k-step exponential Rosenbrock multistep method is k + 1. We demonstrate that the exponential Rosenbrock multistep methods are superior to the exponential multistep methods in terms of accuracy. In summary, for semilinear stiff delay differential equations, this work provides efficient numerical methods with fewer restriction conditions.
Finally, we point out that, while the proposed exponential Rosenbrock multistep methods can be easily applied to nonlinear DDEs, the convergence analysis is restricted to semilinear stiff DDEs. The convergence properties of the methods for nonlinear DDEs should be investigated in future research. Moreover, fractional [29] and stochastic [30] differential equations are also widely used in various fields of applied mathematics; the development of exponential integrators for these problems is left for future work.

Author Contributions

Conceptualization, R.Z. (Rui Zhan) and R.Z. (Runjie Zhang); methodology, R.Z. (Rui Zhan); software, W.C.; validation, X.C.; formal analysis, R.Z. (Rui Zhan); writing—original draft preparation, W.C. and X.C.; writing—review and editing, R.Z. (Rui Zhan) and R.Z. (Runjie Zhang). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under Grant No. 12001115.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

To determine the starting values $y_1, y_2, \ldots, y_{k-1}$ for a $k$-step exponential multistep method, we approximate $g$ in the integral form (2) by the interpolation polynomial $p_0$ through the points $(t_0, G_0), (t_1, G_1), \ldots, (t_{k-1}, G_{k-1})$. Then, we have
\[
p_0(t_0 + \theta h) = \sum_{j=0}^{k-1} \binom{\theta}{j}\, \Delta^j G_0 .
\]
Here, $\Delta^j G_n$ denotes the $j$th forward difference, which is defined recursively by
\[
\Delta^0 G_n = G_n, \qquad \Delta^j G_n = \Delta^{j-1} G_{n+1} - \Delta^{j-1} G_n, \quad j \ge 1 .
\]
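The recursion above, together with the Newton forward form of $p_0$, translates directly into code. A minimal sketch (function names are ours):

```python
def forward_differences(G):
    """Return [Δ^0 G_0, Δ^1 G_0, ..., Δ^(k-1) G_0] for samples G = [G_0, ..., G_{k-1}],
    using Δ^0 G_n = G_n and Δ^j G_n = Δ^{j-1} G_{n+1} - Δ^{j-1} G_n."""
    diffs = [G[0]]
    level = list(G)
    for _ in range(len(G) - 1):
        level = [level[i + 1] - level[i] for i in range(len(level) - 1)]
        diffs.append(level[0])
    return diffs

def newton_forward_eval(G, theta):
    """Evaluate p_0(t_0 + θh) = Σ_j C(θ, j) Δ^j G_0 for real θ."""
    result, binom = 0.0, 1.0          # C(θ, 0) = 1
    for j, d in enumerate(forward_differences(G)):
        result += binom * d
        binom *= (theta - j) / (j + 1)  # C(θ, j+1) = C(θ, j)·(θ - j)/(j + 1)
    return result
```

Since the interpolation is exact on polynomials of degree at most $k-1$, evaluating the Newton form at any real $\theta$ reproduces such data exactly, which gives a convenient sanity check.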
Replacing $g$ by the polynomial $p_0$ gives the desired starting values
\[
y_s = \mathrm{e}^{shA} y_0 + h \sum_{j=0}^{k-1} \gamma_{s,j}(hA)\, \Delta^j G_0, \qquad s = 1, 2, \ldots, k-1 .
\]
The coefficients $\gamma_{s,j}$ are defined as
\[
\gamma_{s,j}(z) = \int_0^s \mathrm{e}^{(s-\theta) z} \binom{\theta}{j}\, \mathrm{d}\theta
= s \int_0^1 \mathrm{e}^{s(1-\theta) z} \binom{s\theta}{j}\, \mathrm{d}\theta .
\]
Comparing (A2) with (5), the coefficients $\gamma_{s,j}$ can also be expressed as linear combinations of the $\varphi$-functions. For $k$ up to 7, we have
\[
\begin{aligned}
\gamma_{s,0}(z) &= s\,\varphi_1(sz), \\
\gamma_{s,1}(z) &= s^2\varphi_2(sz), \\
\gamma_{s,2}(z) &= s^3\varphi_3(sz) - \tfrac{1}{2}s^2\varphi_2(sz), \\
\gamma_{s,3}(z) &= s^4\varphi_4(sz) - s^3\varphi_3(sz) + \tfrac{1}{3}s^2\varphi_2(sz), \\
\gamma_{s,4}(z) &= s^5\varphi_5(sz) - \tfrac{3}{2}s^4\varphi_4(sz) + \tfrac{11}{12}s^3\varphi_3(sz) - \tfrac{1}{4}s^2\varphi_2(sz), \\
\gamma_{s,5}(z) &= s^6\varphi_6(sz) - 2s^5\varphi_5(sz) + \tfrac{7}{4}s^4\varphi_4(sz) - \tfrac{5}{6}s^3\varphi_3(sz) + \tfrac{1}{5}s^2\varphi_2(sz), \\
\gamma_{s,6}(z) &= s^7\varphi_7(sz) - \tfrac{5}{2}s^6\varphi_6(sz) + \tfrac{17}{6}s^5\varphi_5(sz) - \tfrac{15}{8}s^4\varphi_4(sz) + \tfrac{137}{180}s^3\varphi_3(sz) - \tfrac{1}{6}s^2\varphi_2(sz).
\end{aligned}
\]
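For scalar $z$, the integral definition (A2) and its $\varphi$-function representation can be cross-checked numerically. The quadrature helper below is our own construction; the recurrence $\varphi_j(z) = (\varphi_{j-1}(z) - 1/(j-1)!)/z$ is standard but suffers cancellation for small $|z|$, so it is used here only at moderate arguments.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def phi(j, z):
    """phi-functions via phi_0(z) = e^z, phi_j(z) = (phi_{j-1}(z) - 1/(j-1)!)/z."""
    val = np.exp(z)
    for i in range(1, j + 1):
        val = (val - 1.0 / factorial(i - 1)) / z
    return val

def binom_real(theta, j):
    """Generalized binomial coefficient C(theta, j) for real theta."""
    out = 1.0
    for i in range(j):
        out *= (theta - i) / (i + 1)
    return out

def gamma_integral(s, j, z):
    """gamma_{s,j}(z) = integral_0^s e^{(s-theta) z} C(theta, j) dtheta, by quadrature."""
    f = lambda theta: np.exp((s - theta) * z) * binom_real(theta, j)
    return quad(f, 0.0, s)[0]

# Cross-check gamma_{s,2}(z) = s^3 phi_3(sz) - (1/2) s^2 phi_2(sz) at a sample point.
s, z = 2, -0.7
lhs = gamma_integral(s, 2, z)
rhs = s**3 * phi(3, s * z) - 0.5 * s**2 * phi(2, s * z)
```

The same check applies term by term to each line of the table above, which is a convenient way to validate an implementation of the starting-value coefficients.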
As pointed out in [12], under appropriate assumptions, the nonlinear system (A1) has a unique solution, which can be obtained by fixed-point iteration.

Appendix B

For the exponential Rosenbrock multistep method (17), the starting values $y_1, y_2, \ldots, y_{k-1}$ are approximated by replacing the nonlinearity $r_0$ in (16) by the Hermite interpolation polynomial $\hat{p}_0$, namely, by
\[
\hat{p}_0(t_0 + \theta h) = \hat{R}_0^0 + \sum_{j=1}^{k-1} \left( \binom{\theta}{j} - \frac{j}{k}\binom{\theta}{k} \right) \Delta^j \hat{R}_0^0 .
\]
Together with the interpolation polynomial $\hat{\eta}_0$ for $y(t_0 + \theta h - \tau)$, the starting value $y_s$, $s = 1, 2, \ldots, k-1$, is computed by
\[
\begin{aligned}
y_s ={}& \mathrm{e}^{shJ_0} y_0 + sh\,\varphi_1(shJ_0)\, t_0 d_0 + s^2 h^2 \varphi_2(shJ_0)\, d_0
+ h \sum_{j=0}^{k} \gamma_{s,j}(hJ_0)\, J_{0,\tau}\, \Delta^j y_{0,\tau} \\
&+ sh\,\varphi_1(shJ_0)\, \hat{R}_0^0
+ h \sum_{j=1}^{k-1} \left( \gamma_{s,j}(hJ_0) - \frac{j}{k}\gamma_{s,k}(hJ_0) \right) \Delta^j \hat{R}_0^0 .
\end{aligned}
\]

References

  1. MacDonald, N. Biological Delay Systems: Linear Stability Theory; Cambridge University Press: Cambridge, UK, 1989. [Google Scholar]
  2. Hu, H. Dynamics of Controlled Mechanical Systems with Delayed Feedback; Springer: New York, NY, USA, 2002. [Google Scholar]
  3. Kuang, Y. Delay Differential Equations with Applications in Population Dynamics; Academic: Boston, MA, USA, 1993. [Google Scholar]
  4. Stépán, G. Retarded Dynamical Systems: Stability and Characteristic Functions; Longman Scientific and Technical: Harlow, UK, 1989. [Google Scholar]
  5. Hale, J. History of delay equations. In Delay Differential Equations and Applications; Arino, O., Hbid, M., Dads, E.A., Eds.; Springer: Amsterdam, The Netherlands, 2006; pp. 1–28. [Google Scholar]
  6. Abd-Rabo, M.A.; Zakarya, M.; Cesarano, C.; Aly, S. Bifurcation analysis of time-delay model of consumer with the advertising effect. Symmetry 2021, 13, 417. [Google Scholar] [CrossRef]
  7. Duary, A.; Das, S.; Arif, M.G.; Abualnaja, K.M.; Khan, M.A.A.; Zakarya, M.; Shaikh, A.A. Advance and delay in payments with the price-discount inventory model for deteriorating items under capacity constraint and partially backlogged shortages. Alex. Eng. J. 2022, 61, 1735–1745. [Google Scholar] [CrossRef]
  8. Bellen, A.; Zennaro, M. Numerical Methods for Delay Differential Equations; Oxford University Press: New York, NY, USA, 2003. [Google Scholar]
  9. Certaine, J. The solution of ordinary differential equations with large time constants. In Mathematical Methods for Digital Computers; Ralston, A., Wilf, H., Eds.; Wiley: New York, NY, USA, 1960; pp. 128–132. [Google Scholar]
  10. Hochbruck, M.; Ostermann, A. Explicit exponential Runge–Kutta methods for semilinear parabolic problems. SIAM J. Numer. Anal. 2005, 43, 1069–1090. [Google Scholar] [CrossRef] [Green Version]
  11. Hochbruck, M.; Ostermann, A. Exponential Runge–Kutta methods for parabolic problems. Appl. Numer. Math. 2005, 53, 323–339. [Google Scholar] [CrossRef] [Green Version]
  12. Calvo, M.P.; Palencia, C. A Class of Explicit Multistep Exponential Integrators for Semilinear Problems. Numer. Math. 2006, 102, 367–381. [Google Scholar] [CrossRef]
  13. Hochbruck, M.; Ostermann, A. Exponential multistep methods of Adams-type. BIT 2011, 51, 889–908. [Google Scholar] [CrossRef] [Green Version]
  14. Hochbruck, M.; Ostermann, A.; Schweitzer, J. Exponential Rosenbrock-type methods. SIAM J. Numer. Anal. 2009, 47, 786–803. [Google Scholar] [CrossRef]
  15. Luan, V.T.; Ostermann, A. Exponential Rosenbrock methods of order five—Construction, analysis and numerical comparisons. J. Comput. Appl. Math. 2014, 255, 417–431. [Google Scholar] [CrossRef]
  16. Luan, V.T.; Ostermann, A. Parallel exponential Rosenbrock methods. Comput. Math. Appl. 2016, 71, 1137–1150. [Google Scholar] [CrossRef]
  17. Koskela, A.; Ostermann, A. Exponential Taylor methods: Analysis and implementation. Comput. Math. Appl. 2013, 65, 487–499. [Google Scholar] [CrossRef]
  18. Ostermann, A.; Thalhammer, M.; Wright, W. A class of explicit exponential general linear methods. BIT 2006, 46, 409–431. [Google Scholar] [CrossRef]
  19. Minchev, B.; Wright, W. A Review of Exponential Integrators for First Order Semi-Linear Problems; Technical Report; Norwegian University of Science and Technology: Trondheim, Norway, 2005. [Google Scholar]
  20. Zhao, J.; Zhan, R.; Xu, Y. D-convergence and conditional GDN-stability of exponential Runge–Kutta methods for semilinear delay differential equations. Appl. Math. Comput. 2018, 339, 45–58. [Google Scholar] [CrossRef]
  21. Zhao, J.; Zhan, R.; Xu, Y. Explicit exponential Runge–Kutta methods for semilinear parabolic delay differential equations. Math. Comput. Simul. 2020, 178, 366–381. [Google Scholar] [CrossRef]
  22. Fang, J.; Zhan, R. High order explicit exponential Runge–Kutta methods for semilinear delay differential equations. J. Comput. Appl. Math. 2021, 388. [Google Scholar] [CrossRef]
  23. Moler, C.; Van Loan, C. Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later. SIAM Rev. 2003, 45, 3–49. [Google Scholar] [CrossRef]
  24. Al-Mohy, A.; Higham, N. Computing the action of the matrix exponential, with an application to exponential integrators. SIAM J. Sci. Comput. 2011, 33, 488–511. [Google Scholar] [CrossRef]
  25. Jawecki, T. A study of defect-based error estimates for the Krylov approximation of ϕ-functions. Numer. Algorithms 2022, 90, 323–361. [Google Scholar] [CrossRef]
  26. Li, D.; Yang, S.; Lan, J. Efficient and accurate computation for the ϕ-functions arising from exponential integrators. Calcolo 2022, 59, 1–24. [Google Scholar] [CrossRef]
  27. Pazy, A. Semigroups of Linear Operators and Applications to Partial Differential Equations; Springer: New York, NY, USA, 1983. [Google Scholar]
  28. Hairer, E.; Nørsett, S.; Wanner, G. Solving Ordinary Differential Equations. I. Nonstiff Problems; Springer Series in Computational Mathematics; Springer: Heidelberg, Germany; New York, NY, USA, 1993. [Google Scholar]
  29. Ali, K.K.; Mohamed, E.M.; Abd El salam, M.A.; Nisar, K.S.; Khashan, M.M.; Zakarya, M. A collocation approach for multiterm variable-order fractional delay-differential equations using shifted Chebyshev polynomials. Alex. Eng. J. 2022, 61, 3511–3526. [Google Scholar] [CrossRef]
  30. Ghany, H.A.; Ibrahim, M.Z.N. Generalized solutions of Wick-type stochastic KdV-Burgers equations using exp-function method. Int. Rev. Phys. 2014, 8. [Google Scholar]
Figure 1. Errors of k-step exponential multistep method for problem (23). k = 1 , 2 , 3 , 4 . D = σ = a = b = c = 1 .
Figure 2. Errors of k-step exponential Rosenbrock multistep method for problem (23). k = 1 , 2 , 3 , 4 . D = σ = a = b = c = 1 .
Figure 3. Errors of k-step exponential multistep method for problem (24). k = 1 , 2 , 3 , 4 . D 1 = a 1 = b 1 = r 2 = b 2 = a 2 = 1 , D 2 = r 1 = 2 .
Figure 4. Errors of k-step exponential Rosenbrock multistep method for problem (24). k = 1 , 2 , 3 , 4 . D 1 = a 1 = b 1 = r 2 = b 2 = a 2 = 1 , D 2 = r 1 = 2 .
Table 1. CPU times of all the methods achieving error tolerance 10 8 for problem (23).
Method                                      k = 2     k = 3     k = 4
Adams linear multistep methods              55.844    40.141    34.344
exponential multistep method                11.048    5.1875    1.3281
exponential Rosenbrock multistep method     4.7969    1.7813    1.2656
