Review

Continuous-Stage Runge–Kutta Approximation to Differential Problems

1 Dipartimento di Matematica, Università di Bari, Via Orabona 4, 70125 Bari, Italy
2 Dipartimento di Matematica e Informatica “U. Dini”, Università di Firenze, Viale Morgagni 67/A, 50134 Firenze, Italy
* Author to whom correspondence should be addressed.
Axioms 2022, 11(5), 192; https://doi.org/10.3390/axioms11050192
Submission received: 21 March 2022 / Revised: 12 April 2022 / Accepted: 18 April 2022 / Published: 21 April 2022

Abstract:
In recent years, the efficient numerical solution of Hamiltonian problems has led to the definition of a class of energy-conserving Runge–Kutta methods named Hamiltonian Boundary Value Methods (HBVMs). Such methods admit an interesting interpretation in terms of continuous-stage Runge–Kutta methods. In this review paper, we recall this aspect and extend it to higher-order differential problems.

1. Introduction

Continuous-stage Runge–Kutta methods (csRK methods, hereafter), introduced by Butcher [1,2,3], have been used in recent years as a useful tool for studying structure-preserving methods for Hamiltonian problems (see, e.g., [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19]). In particular, we shall at first consider methods, within this latter class, having Butcher tableau in the form:
$$\begin{array}{c|c} c & a(c,\tau)\\ \hline & b(c) \end{array}\,,$$
where
$$a:[0,1]\times[0,1]\rightarrow\mathbb{R}, \qquad b:[0,1]\rightarrow[0,+\infty),$$
are suitable functions defining the method. Hereafter, we shall use the notation (1), in place of the more commonly used $a_{c\tau}$, $b_c$, in order to make clear that these are functions of the respective arguments. For later use, we shall also denote
$$\dot a(c,\tau) = \frac{\mathrm{d}}{\mathrm{d}c}\,a(c,\tau).$$
When used for solving the ODE-IVP
$$\dot y(t) = f(y(t)), \quad t\in[0,h], \qquad y(0)=y_0\in\mathbb{R}^m,$$
the method (1)–(3) provides an approximation
$$\dot\sigma(ch) = \int_0^1 \dot a(c,\tau)\,f(\sigma(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1], \qquad \sigma(0)=y_0,$$
to (4), and therefore,
$$\sigma(ch) = y_0 + h\int_0^1 a(c,\tau)\,f(\sigma(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1],$$
with the approximation to $y(h)$ given by
$$y_1 = y_0 + h\int_0^1 b(c)\,f(\sigma(ch))\,\mathrm{d}c.$$
In particular, we shall hereafter consider the natural choice
$$b(c)\equiv 1, \quad c\in[0,1],$$
though different choices have also been considered [9]. In any case, consistency requires that
$$\int_0^1 b(c)\,\mathrm{d}c = 1.$$
In this review paper, we consider the csRKs derived from the energy-conserving methods called Hamiltonian Boundary Value Methods (HBVMs), which have been the subject of many investigations in recent years (see, e.g., [4,5,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37]). In particular, the arguments used strictly follow those in [21] (in turn, inspired by [12,38,39]). However, we also consider an interesting generalization with respect to the arguments in [21], as explained below.
With this premise, the structure of the paper is as follows: in Section 2 we recall the basic facts concerning the initial value problem (4); in Section 3 we generalize the approach to the case of second-order problems, thereby obtaining a continuous-stage Runge–Kutta–Nyström method (csRKN method, hereafter), whose generalization to kth-order problems is also sketched out; Section 4 is devoted to deriving relevant families of methods for the previous classes of problems, also reporting some numerical tests; at last, a few conclusions are given in Section 5.

2. Approximation of ODE-IVPs

Let us consider, at first, the initial value problem (4). As done in [21], for our analysis we shall use an expansion of the vector field along the orthonormal Legendre polynomial basis:
$$P_i\in\Pi_i, \qquad \int_0^1 P_i(x)\,P_j(x)\,\mathrm{d}x = \delta_{ij}, \quad i,j=0,1,\dots,$$
where, as usual, $\Pi_i$ denotes the vector space of polynomials of degree $i$, and $\delta_{ij}$ is the Kronecker symbol. Consequently, assuming for the sake of simplicity that $f$ is analytic, we can rewrite (4) as
$$\dot y(ch) = \sum_{j\ge 0} P_j(c)\,\gamma_j(y), \quad c\in[0,1], \qquad y(0)=y_0\in\mathbb{R}^m,$$
with
$$\gamma_j(y) = \int_0^1 P_j(\tau)\,f(y(\tau h))\,\mathrm{d}\tau, \quad j=0,1,\dots.$$
Integrating side-by-side, and imposing the initial condition, one then obtains that the solution of (10) is formally given by
$$y(ch) = y_0 + h\sum_{j\ge 0}\int_0^c P_j(x)\,\mathrm{d}x\;\gamma_j(y), \quad c\in[0,1].$$
For $c=1$, considering that, by virtue of (9), $\int_0^1 P_j(x)\,\mathrm{d}x=\delta_{j0}$, and taking into account (4), one obtains:
$$y(h) = y_0 + h\int_0^1 f(y(ch))\,\mathrm{d}c \equiv y_0 + \int_0^h \dot y(t)\,\mathrm{d}t,$$
i.e., the fundamental theorem of calculus. Interestingly, by setting
$$a(c,\tau) = \sum_{j\ge 0}\int_0^c P_j(x)\,\mathrm{d}x\;P_j(\tau),$$
one obtains that (10)–(13) can be rewritten (see (3)), respectively, as:
$$\dot y(ch) = \int_0^1 \dot a(c,\tau)\,f(y(\tau h))\,\mathrm{d}\tau, \qquad y(0)=y_0\in\mathbb{R}^m,$$
$$y(ch) = y_0 + h\int_0^1 a(c,\tau)\,f(y(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1],$$
$$y(h) = y_0 + h\int_0^1 f(y(ch))\,\mathrm{d}c.$$
Consequently,
$$\begin{array}{c|c} c & a(c,\tau)\\ \hline & 1 \end{array}$$
is the csRK “method” providing the exact solution to the problem.

2.1. Vector Formulation

For later use, we now rewrite (14) in vector form. For this purpose, let us introduce the infinite vectors,
$$\mathcal{P}(c) = \begin{pmatrix} P_0(c)\\ P_1(c)\\ \vdots \end{pmatrix}, \qquad \mathcal{I}(c) = \int_0^c \mathcal{P}(x)\,\mathrm{d}x,$$
and the infinite matrix,
$$X = \begin{pmatrix} \xi_0 & -\xi_1 & & \\ \xi_1 & 0 & -\xi_2 & \\ & \xi_2 & 0 & \ddots \\ & & \ddots & \ddots \end{pmatrix}, \qquad \xi_i = \frac{1}{2\sqrt{|4i^2-1|}}, \quad i=0,1,\dots,$$
and recall that, by virtue of (9), and due to the well-known relations between the Legendre polynomials and their integrals,
$$\mathcal{I}(c)^\top = \mathcal{P}(c)^\top X, \qquad \int_0^1 \mathcal{P}(\tau)\,\mathcal{P}(\tau)^\top\mathrm{d}\tau = I, \qquad \int_0^1 \mathcal{P}(\tau)\,\mathcal{I}(\tau)^\top\mathrm{d}\tau = X,$$
with $I$ being the identity operator. Consequently, (14) can be rewritten as
$$a(c,\tau) = \mathcal{I}(c)^\top\mathcal{P}(\tau) = \mathcal{P}(c)^\top X\,\mathcal{P}(\tau)$$
and, in particular, one may regard the Butcher tableau, equivalent to (16),
$$\begin{array}{c|c} c & \mathcal{P}(c)^\top X\,\mathcal{P}(\tau)\\ \hline & 1 \end{array}\,,$$
as the corresponding W-transformation [40] of the continuous problem.
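The relations in (19) are easy to check numerically. The following Python sketch (function and variable names are ours, not taken from the references) verifies, for a finite truncation, the orthonormality of the shifted Legendre basis and reconstructs the leading block of the matrix X by Gauss quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def P(j):
    # Orthonormal shifted Legendre polynomial on [0,1]: sqrt(2j+1) * L_j(2x-1)
    return np.sqrt(2 * j + 1) * Legendre.basis(j, domain=[0, 1])

def xi(i):
    return 1.0 / (2.0 * np.sqrt(abs(4 * i * i - 1)))

s = 4                                   # truncation level for the infinite objects
X = np.zeros((s, s))                    # leading s x s block of the matrix X in (18)
X[0, 0] = xi(0)
for i in range(1, s):
    X[i, i - 1], X[i - 1, i] = xi(i), -xi(i)

# High-order Gauss rule on [0,1]: exact for the polynomial integrands in (19)
x, w = leggauss(20)
t, b = (x + 1) / 2, w / 2

Pt = np.array([P(j)(t) for j in range(s)])                 # P_j(t_l)
It = np.array([P(j).integ(lbnd=0)(t) for j in range(s)])   # I_j(t_l) = int_0^t P_j

G  = Pt @ np.diag(b) @ Pt.T    # ~ int_0^1 P P^T dtau = I
Xq = Pt @ np.diag(b) @ It.T    # ~ int_0^1 P(tau) I(tau)^T dtau = X
print(np.allclose(G, np.eye(s)), np.allclose(Xq, X))       # True True
```

Both checks agree with the stated relations up to round-off.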
It is worth mentioning that, by defining the infinite vector (see (11))
$$\boldsymbol\gamma := \begin{pmatrix}\gamma_0(y)\\ \gamma_1(y)\\ \vdots\end{pmatrix} \equiv \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f(y(\tau h))\,\mathrm{d}\tau$$
where, as is usual, $\otimes$ denotes the Kronecker product, then (12) can be rewritten as
$$y(ch) = y_0 + h\,\mathcal{I}(c)^\top\otimes I_m\,\boldsymbol\gamma$$
and, consequently, the vector $\boldsymbol\gamma$ satisfies:
$$\boldsymbol\gamma = \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f\!\left(y_0 + h\,\mathcal{I}(\tau)^\top\otimes I_m\,\boldsymbol\gamma\right)\mathrm{d}\tau,$$
with (compare with (15))
$$y(h) = y_0 + h\,\gamma_0(y).$$

2.2. Polynomial Approximation

In order to derive a polynomial approximation $\sigma\in\Pi_s$ of (15), it suffices to truncate the infinite series in (14) after $s$ terms:
$$a_s(c,\tau) = \sum_{j=0}^{s-1}\int_0^c P_j(x)\,\mathrm{d}x\;P_j(\tau),$$
so that
$$\dot a_s(c,\tau) = \sum_{j=0}^{s-1} P_j(c)\,P_j(\tau),$$
and therefore, (15) is approximated by
$$\dot\sigma(ch) = \int_0^1 \dot a_s(c,\tau)\,f(\sigma(\tau h))\,\mathrm{d}\tau, \qquad \sigma(0)=y_0\in\mathbb{R}^m,$$
$$\sigma(ch) = y_0 + h\int_0^1 a_s(c,\tau)\,f(\sigma(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1],$$
$$y_1 := \sigma(h) = y_0 + h\int_0^1 f(\sigma(ch))\,\mathrm{d}c.$$
Remark 1.
As shown in [21], the csRK method
$$\begin{array}{c|c} c & a_s(c,\tau)\\ \hline & 1 \end{array}\,,$$
with $a_s(c,\tau)$ given by (25), is equivalent to the energy-preserving method named HBVM$(\infty,s)$, introduced in [4] (in particular, when $s=1$ one obtains the AVF method [10]; see also [6] for a related approach).
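As a simple illustration of the case $s=1$ mentioned above: since $P_0\equiv 1$, the truncated coefficient (25) reduces to
$$a_1(c,\tau) = \int_0^c P_0(x)\,\mathrm{d}x\; P_0(\tau) = c,$$
so that $\sigma(\tau h) = y_0 + \tau h\,\gamma_0(\sigma) = y_0 + \tau\,(y_1 - y_0)$, and the new approximation becomes
$$y_1 = y_0 + h\int_0^1 f\!\left(y_0 + \tau\,(y_1 - y_0)\right)\mathrm{d}\tau,$$
which is exactly the AVF method of [10].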
The following property holds true ([5], Theorem 1):
$$y_1 - y(h) = O(h^{2s+1});$$
i.e., the approximation procedure has order $2s$.
Remark 2.
If $H:\mathbb{R}^m\rightarrow\mathbb{R}$ and
$$f(y) = J\nabla H(y), \qquad \text{with}\quad J^\top = -J,$$
then $H(y_1)=H(y_0)$. In the case of Hamiltonian problems, $H$ is the energy of the system. Consequently, the csRK method (28) is energy-conserving, as shown in ([5], Theorem 3).
A corresponding vector formulation of the csRK method (28) can be derived by replacing the infinite vectors and matrix in (17)–(18) with
$$\mathcal{P}_r(c) = \begin{pmatrix}P_0(c)\\ \vdots\\ P_{r-1}(c)\end{pmatrix}, \quad r=s,\,s+1, \qquad \mathcal{I}_s(c) = \int_0^c \mathcal{P}_s(x)\,\mathrm{d}x,$$
and the matrices
$$\hat X_s = \begin{pmatrix} \xi_0 & -\xi_1 & & \\ \xi_1 & 0 & \ddots & \\ & \ddots & \ddots & -\xi_{s-1}\\ & & \xi_{s-1} & 0\\ & & & \xi_s \end{pmatrix} \equiv \begin{pmatrix} X_s\\ 0\;\cdots\;0\;\;\xi_s \end{pmatrix}\in\mathbb{R}^{(s+1)\times s},$$
such that
$$\mathcal{I}_s(c)^\top = \mathcal{P}_{s+1}(c)^\top \hat X_s, \qquad \int_0^1 \mathcal{P}_r(\tau)\,\mathcal{P}_r(\tau)^\top\mathrm{d}\tau = I_r, \qquad \int_0^1 \mathcal{P}_s(\tau)\,\mathcal{I}_s(\tau)^\top\mathrm{d}\tau = X_s,$$
with $I_r\in\mathbb{R}^{r\times r}$ being the identity matrix. Consequently, with reference to (25), one has
$$a_s(c,\tau) = \mathcal{I}_s(c)^\top\mathcal{P}_s(\tau) = \mathcal{P}_{s+1}(c)^\top\hat X_s\,\mathcal{P}_s(\tau),$$
and in particular, one may regard the Butcher tableau, equivalent to (28),
$$\begin{array}{c|c} c & \mathcal{P}_{s+1}(c)^\top\hat X_s\,\mathcal{P}_s(\tau)\\ \hline & 1 \end{array}\,,$$
as the corresponding W-transformation of an HBVM$(\infty,s)$ method [21].
Further, according to [32], setting the vector
$$\boldsymbol\gamma_s := \begin{pmatrix}\gamma_0(\sigma)\\ \vdots\\ \gamma_{s-1}(\sigma)\end{pmatrix},$$
with $\gamma_j(\sigma)$ defined as in (11), by formally replacing $y$ with $\sigma$, one has that such a vector satisfies the equation (compare with (23))
$$\boldsymbol\gamma_s = \int_0^1 \mathcal{P}_s(\tau)\otimes I_m\; f\!\left(y_0 + h\,\mathcal{I}_s(\tau)^\top\otimes I_m\,\boldsymbol\gamma_s\right)\mathrm{d}\tau,$$
with (compare with (27) and (24))
$$y_1 = y_0 + h\,\gamma_0(\sigma).$$

3. Approximation of Special Second-Order ODE-IVPs

An interesting particular case is that of special second-order problems, namely, problems in the form
$$\ddot y(t) = f(y(t)), \quad t\in[0,h], \qquad y(0)=y_0, \quad \dot y(0)=\dot y_0\in\mathbb{R}^m,$$
which, in turn, form a special case of the more general problems:
$$\ddot y(t) = f(y(t),\dot y(t)), \quad t\in[0,h], \qquad y(0)=y_0, \quad \dot y(0)=\dot y_0\in\mathbb{R}^m.$$
Let us study, at first, the special problem (38), which is very important in many applications (as an example, separable Hamiltonian problems are in such a form); then we discuss (39). Setting $\dot y(t)=p(t)$, and expanding the right-hand sides along the Legendre basis, gives:
$$\dot y(ch) = \mathcal{P}(c)^\top\otimes I_m \int_0^1 \mathcal{P}(\tau)\otimes I_m\; p(\tau h)\,\mathrm{d}\tau, \qquad \dot p(ch) = \mathcal{P}(c)^\top\otimes I_m \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f(y(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1].$$
Remark 3.
It is worth mentioning that, with reference to (3) and (20), the two previous equations can be rewritten as:
$$\dot y(ch) = \int_0^1 \dot a(c,\tau)\, p(\tau h)\,\mathrm{d}\tau, \qquad \dot p(ch) = \int_0^1 \dot a(c,\tau)\, f(y(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1].$$
Integrating side by side (40), and imposing the initial condition then gives:
$$y(ch) = y_0 + h\,\mathcal{I}(c)^\top\otimes I_m \int_0^1 \mathcal{P}(\tau)\otimes I_m\; p(\tau h)\,\mathrm{d}\tau, \qquad p(ch) = \dot y_0 + h\,\mathcal{I}(c)^\top\otimes I_m \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f(y(\tau h))\,\mathrm{d}\tau \equiv \dot y_0 + h\,\mathcal{I}(c)^\top\otimes I_m\,\boldsymbol\gamma, \quad c\in[0,1],$$
with the vector $\boldsymbol\gamma$ formally still given by (22). Denoting, in general, by $e_i$ the infinite vector whose $j$th entry is $\delta_{ij}$, substitution of the second equation in (42) into the first one, taking into account that
$$\int_0^1 \mathcal{P}(\tau)\,\mathrm{d}\tau = e_1 \equiv \begin{pmatrix}1\\ 0\\ \vdots\end{pmatrix}, \qquad \mathcal{I}(c)^\top e_1 = c,$$
and considering (19) and again (22), gives:
$$y(ch) = y_0 + h\,\mathcal{I}(c)^\top\otimes I_m \int_0^1 \mathcal{P}(\tau)\otimes I_m\left[\dot y_0 + h\,\mathcal{I}(\tau)^\top\otimes I_m\,\boldsymbol\gamma\right]\mathrm{d}\tau = y_0 + ch\,\dot y_0 + h^2\,\mathcal{I}(c)^\top\!\left[\int_0^1 \mathcal{P}(\tau)\,\mathcal{I}(\tau)^\top\mathrm{d}\tau\right]\otimes I_m\,\boldsymbol\gamma$$
$$= y_0 + ch\,\dot y_0 + h^2\,\mathcal{I}(c)^\top X\otimes I_m\,\boldsymbol\gamma \equiv y_0 + ch\,\dot y_0 + h^2\int_0^1 \mathcal{I}(c)^\top X\,\mathcal{P}(\tau)\otimes I_m\; f(y(\tau h))\,\mathrm{d}\tau.$$
By setting
$$\bar a(c,\tau) = \mathcal{I}(c)^\top X\,\mathcal{P}(\tau) \equiv \mathcal{P}(c)^\top X^2\,\mathcal{P}(\tau),$$
one has that (42) can be rewritten as:
$$y(ch) = y_0 + ch\,\dot y_0 + h^2\int_0^1 \bar a(c,\tau)\, f(y(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1].$$
Remark 4.
We observe that, from the second equation in (40), (41), and considering (20), one derives:
$$p(ch) = \dot y_0 + h\int_0^1 a(c,\tau)\, f(y(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1].$$
Moreover, it is worth mentioning that (see (44), (14), and (19))
$$\bar a(c,\tau) = \int_0^1 \mathcal{I}(c)^\top\mathcal{P}(\xi)\;\mathcal{I}(\xi)^\top\mathcal{P}(\tau)\,\mathrm{d}\xi \equiv \int_0^1 a(c,\xi)\, a(\xi,\tau)\,\mathrm{d}\xi.$$
From (40) and (45), one obtains that the values at h will be given by:
$$\dot y(h) \equiv p(h) = \dot y_0 + h\, e_1^\top\otimes I_m\,\boldsymbol\gamma = \dot y_0 + h\,\gamma_0(y) \equiv \dot y_0 + h\int_0^1 f(y(ch))\,\mathrm{d}c,$$
and (see (18))
$$y(h) = y_0 + h\,\dot y_0 + h^2\, e_1^\top X\otimes I_m \int_0^1 \mathcal{P}(c)\otimes I_m\, f(y(ch))\,\mathrm{d}c = y_0 + h\,\dot y_0 + h^2\int_0^1 \left(\xi_0 e_1 - \xi_1 e_2\right)^\top\mathcal{P}(c)\otimes I_m\, f(y(ch))\,\mathrm{d}c$$
$$= y_0 + h\,\dot y_0 + h^2\int_0^1 \left[\xi_0 P_0(c) - \xi_1 P_1(c)\right] f(y(ch))\,\mathrm{d}c \equiv y_0 + h\,\dot y_0 + h^2\int_0^1 (1-c)\, f(y(ch))\,\mathrm{d}c.$$
In other words, the exact solution of problem (38) is generated by the following csRKN “method”:
$$\begin{array}{c|c} c & \bar a(c,\tau)\\ \hline & 1-c\\ & 1 \end{array} \;\equiv\; \begin{array}{c|c} c & \mathcal{I}(c)^\top X\,\mathcal{P}(\tau)\\ \hline & 1-c\\ & 1 \end{array} \;\equiv\; \begin{array}{c|c} c & \mathcal{P}(c)^\top X^2\,\mathcal{P}(\tau)\\ \hline & 1-c\\ & 1 \end{array}\,.$$

3.1. Vector Formulation

An interesting alternative formulation of the method (50), akin to (23) and (24) for first order problems, can be derived by combining (22) and (43):
$$\boldsymbol\gamma = \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f\!\left(y_0 + \tau h\,\dot y_0 + h^2\,\mathcal{I}(\tau)^\top X\otimes I_m\,\boldsymbol\gamma\right)\mathrm{d}\tau,$$
with values at $t=h$ given by (compare with (48) and (49)):
$$\dot y(h) = \dot y_0 + h\,\gamma_0(y), \qquad y(h) = y_0 + h\,\dot y_0 + h^2\left[\xi_0\gamma_0(y) - \xi_1\gamma_1(y)\right].$$

3.2. The Case of the General Second-Order Problem

The arguments used above can be extended to cope with problem (39) in a straightforward way. In fact, by following similar steps as before, (45) and (46) now become:
$$y(ch) = y_0 + ch\,\dot y_0 + h^2\int_0^1 \bar a(c,\tau)\, f(y(\tau h),\dot y(\tau h))\,\mathrm{d}\tau, \qquad \dot y(ch) = \dot y_0 + h\int_0^1 a(c,\tau)\, f(y(\tau h),\dot y(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1],$$
with the values at $h$ given by
$$\dot y(h) = \dot y_0 + h\int_0^1 f(y(ch),\dot y(ch))\,\mathrm{d}c, \qquad y(h) = y_0 + h\,\dot y_0 + h^2\int_0^1 (1-c)\, f(y(ch),\dot y(ch))\,\mathrm{d}c.$$
Consequently, we obtain the general csRKN method
$$\begin{array}{c|cc} c & \bar a(c,\tau) & a(c,\tau)\\ \hline & 1-c\\ & 1 \end{array} \;\equiv\; \begin{array}{c|cc} c & \mathcal{I}(c)^\top X\,\mathcal{P}(\tau) & \mathcal{I}(c)^\top\mathcal{P}(\tau)\\ \hline & 1-c\\ & 1 \end{array} \;\equiv\; \begin{array}{c|cc} c & \mathcal{P}(c)^\top X^2\,\mathcal{P}(\tau) & \mathcal{P}(c)^\top X\,\mathcal{P}(\tau)\\ \hline & 1-c\\ & 1 \end{array}\,,$$
in place of (50).
The bad news is that the system of Equations (53) now has a doubled size with respect to (45). On the other hand, the good news is that, upon modifying the definition of the vector $\boldsymbol\gamma$ in (22) as follows,
$$\boldsymbol\gamma := \begin{pmatrix}\gamma_0(y,\dot y)\\ \gamma_1(y,\dot y)\\ \vdots\end{pmatrix} \equiv \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f(y(\tau h),\dot y(\tau h))\,\mathrm{d}\tau,$$
one has that the Equations (53) can be rewritten as
$$y(ch) = y_0 + ch\,\dot y_0 + h^2\,\mathcal{I}(c)^\top X\otimes I_m\,\boldsymbol\gamma, \qquad \dot y(ch) = \dot y_0 + h\,\mathcal{I}(c)^\top\otimes I_m\,\boldsymbol\gamma, \quad c\in[0,1].$$
Consequently, we obtain again a single equation for the vector $\boldsymbol\gamma$ defined in (56):
$$\boldsymbol\gamma = \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f\!\left(y_0 + \tau h\,\dot y_0 + h^2\,\mathcal{I}(\tau)^\top X\otimes I_m\,\boldsymbol\gamma,\;\; \dot y_0 + h\,\mathcal{I}(\tau)^\top\otimes I_m\,\boldsymbol\gamma\right)\mathrm{d}\tau.$$
Similarly, the new approximations in (54) become:
$$\dot y(h) = \dot y_0 + h\,\gamma_0(y,\dot y), \qquad y(h) = y_0 + h\,\dot y_0 + h^2\left[\xi_0\gamma_0(y,\dot y) - \xi_1\gamma_1(y,\dot y)\right].$$
Remarkably enough, the Equations (57) and (58) are very similar to (51) and (52), respectively.

3.3. Polynomial Approximation

Polynomial approximations of degree $s$ to (40), $\sigma(ch)\approx y(ch)$ and $\sigma_1(ch)\approx p(ch)$, can be obtained by formal substitution of $\mathcal{P}(c)$, $\mathcal{I}(c)$, and $X$ in (40)–(49) with the corresponding finite counterparts $\mathcal{P}_s(c)$, $\mathcal{I}_s(c)$ (recall that $\mathcal{I}_s(c)^\top \equiv \mathcal{P}_{s+1}(c)^\top\hat X_s$), and $X_s$ defined in (30) and (31). Consequently, following similar steps as above, one obtains:
$$\dot\sigma(ch) = \mathcal{P}_s(c)^\top\otimes I_m \int_0^1 \mathcal{P}_s(\tau)\otimes I_m\;\sigma_1(\tau h)\,\mathrm{d}\tau, \qquad \dot\sigma_1(ch) = \mathcal{P}_s(c)^\top\otimes I_m \int_0^1 \mathcal{P}_s(\tau)\otimes I_m\; f(\sigma(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1],$$
in place of (40). Similarly, setting (in place of (44))
$$\bar a_s(c,\tau) = \mathcal{I}_s(c)^\top X_s\,\mathcal{P}_s(\tau) \equiv \mathcal{P}_{s+1}(c)^\top\hat X_s X_s\,\mathcal{P}_s(\tau),$$
from (59) one derives:
$$\sigma(ch) = y_0 + ch\,\dot y_0 + h^2\int_0^1 \bar a_s(c,\tau)\, f(\sigma(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1],$$
with the new approximations,
$$y_1 := \sigma(h) \qquad\text{and}\qquad \dot y_1 := \sigma_1(h),$$
given by
$$\dot y_1 = \dot y_0 + h\int_0^1 f(\sigma(ch))\,\mathrm{d}c, \qquad y_1 = y_0 + h\,\dot y_0 + h^2\int_0^1 (1-c)\, f(\sigma(ch))\,\mathrm{d}c.$$
Remark 5.
Similarly to (41), one derives:
$$\dot\sigma(ch) = \int_0^1 \dot a_s(c,\tau)\,\sigma_1(\tau h)\,\mathrm{d}\tau, \qquad \dot\sigma_1(ch) = \int_0^1 \dot a_s(c,\tau)\, f(\sigma(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1].$$
Moreover, (compare with (46)), from the second equation in (59), and considering (33), one obtains:
$$\sigma_1(ch) = \dot y_0 + h\int_0^1 a_s(c,\tau)\, f(\sigma(\tau h))\,\mathrm{d}\tau, \quad c\in[0,1].$$
At last, (compare with (47)), from (60), (33) and (32), one has:
$$\bar a_s(c,\tau) = \int_0^1 \mathcal{I}_s(c)^\top\mathcal{P}_s(\xi)\;\mathcal{I}_s(\xi)^\top\mathcal{P}_s(\tau)\,\mathrm{d}\xi \equiv \int_0^1 a_s(c,\xi)\, a_s(\xi,\tau)\,\mathrm{d}\xi.$$
Summing up, through (60)–(62) we have defined the following csRKN method:
$$\begin{array}{c|c} c & \bar a_s(c,\tau)\\ \hline & 1-c\\ & 1 \end{array} \;\equiv\; \begin{array}{c|c} c & \mathcal{I}_s(c)^\top X_s\,\mathcal{P}_s(\tau)\\ \hline & 1-c\\ & 1 \end{array} \;\equiv\; \begin{array}{c|c} c & \mathcal{P}_{s+1}(c)^\top\hat X_s X_s\,\mathcal{P}_s(\tau)\\ \hline & 1-c\\ & 1 \end{array}\,.$$
Remark 6.
This latter method is equivalent to the csRKN method obtained by applying the HBVM$(\infty,s)$ method to the special second-order problem (38) [21].
An alternative formulation of the method (66) can be obtained by repeating arguments similar to those used in Section 3.1. In fact, by setting the vector (compare with (35)–(37))
$$\boldsymbol\gamma_s := \begin{pmatrix}\gamma_0(\sigma)\\ \vdots\\ \gamma_{s-1}(\sigma)\end{pmatrix} \equiv \int_0^1 \mathcal{P}_s(\tau)\otimes I_m\; f(\sigma(\tau h))\,\mathrm{d}\tau,$$
by virtue of (60) and (61), such a vector satisfies the equation
$$\boldsymbol\gamma_s = \int_0^1 \mathcal{P}_s(\tau)\otimes I_m\; f\!\left(y_0 + \tau h\,\dot y_0 + h^2\,\mathcal{I}_s(\tau)^\top X_s\otimes I_m\,\boldsymbol\gamma_s\right)\mathrm{d}\tau,$$
with the new approximations given by (compare with (52)):
$$\dot y_1 = \dot y_0 + h\,\gamma_0(\sigma), \qquad y_1 = y_0 + h\,\dot y_0 + h^2\left[\xi_0\gamma_0(\sigma) - \xi_1\gamma_1(\sigma)\right].$$
For the general problem (39), following similar steps as above, the generalized csRKN method (55) becomes, with reference to (33):
$$\begin{array}{c|cc} c & \bar a_s(c,\tau) & a_s(c,\tau)\\ \hline & 1-c\\ & 1 \end{array} \;\equiv\; \begin{array}{c|cc} c & \mathcal{I}_s(c)^\top X_s\,\mathcal{P}_s(\tau) & \mathcal{I}_s(c)^\top\mathcal{P}_s(\tau)\\ \hline & 1-c\\ & 1 \end{array} \;\equiv\; \begin{array}{c|cc} c & \mathcal{P}_{s+1}(c)^\top\hat X_s X_s\,\mathcal{P}_s(\tau) & \mathcal{P}_{s+1}(c)^\top\hat X_s\,\mathcal{P}_s(\tau)\\ \hline & 1-c\\ & 1 \end{array}\,.$$
Additionally, we can derive a vector formulation similar to (67) and (68). In fact, by defining the vector
$$\boldsymbol\gamma_s := \begin{pmatrix}\gamma_0(\sigma,\sigma_1)\\ \vdots\\ \gamma_{s-1}(\sigma,\sigma_1)\end{pmatrix} \equiv \int_0^1 \mathcal{P}_s(\tau)\otimes I_m\; f(\sigma(\tau h),\sigma_1(\tau h))\,\mathrm{d}\tau,$$
it turns out that it satisfies the equation
$$\boldsymbol\gamma_s = \int_0^1 \mathcal{P}_s(\tau)\otimes I_m\; f\!\left(y_0 + \tau h\,\dot y_0 + h^2\,\mathcal{I}_s(\tau)^\top X_s\otimes I_m\,\boldsymbol\gamma_s,\;\; \dot y_0 + h\,\mathcal{I}_s(\tau)^\top\otimes I_m\,\boldsymbol\gamma_s\right)\mathrm{d}\tau,$$
with the new approximations given by:
$$\dot y_1 = \dot y_0 + h\,\gamma_0(\sigma,\sigma_1), \qquad y_1 = y_0 + h\,\dot y_0 + h^2\left[\xi_0\gamma_0(\sigma,\sigma_1) - \xi_1\gamma_1(\sigma,\sigma_1)\right].$$
Remark 7.
By comparing the discrete problems (68) and (72), one realizes that they have the same dimension, even though the latter solves the general problem (39). This fact is even more striking since, as sketched out in the next section, the same holds for a general kth-order ODE-IVP.

3.4. Approximation of General kth-Order ODE-IVPs

Let us consider the case of a general kth order problem:
$$y^{(k)}(ch) = f\!\left(y(ch),\, y^{(1)}(ch),\, \dots,\, y^{(k-1)}(ch)\right), \quad c\in[0,1], \qquad y^{(i)}(0) = y_0^{(i)}\in\mathbb{R}^m, \quad i=0,\dots,k-1.$$
Hereafter, for the sake of brevity, we shall omit the arguments of the Fourier coefficients $\gamma_i$. Then, repeating similar steps as above and defining the infinite vector
$$\boldsymbol\gamma \equiv \begin{pmatrix}\gamma_0\\ \gamma_1\\ \vdots\end{pmatrix} := \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f\!\left(y(\tau h),\dots,y^{(k-1)}(\tau h)\right)\mathrm{d}\tau,$$
one has that it satisfies the equation:
$$\boldsymbol\gamma = \int_0^1 \mathcal{P}(\tau)\otimes I_m\; f\!\left(\sum_{i=0}^{k-1}\frac{(\tau h)^i}{i!}\,y_0^{(i)} + h^k\,\mathcal{I}(\tau)^\top X^{k-1}\otimes I_m\,\boldsymbol\gamma,\;\dots,\;\; y_0^{(k-1)} + h\,\mathcal{I}(\tau)^\top\otimes I_m\,\boldsymbol\gamma\right)\mathrm{d}\tau,$$
with the values at $t=h$ given by:
$$y^{(i)}(h) = \sum_{j=0}^{k-1-i}\frac{h^j}{j!}\,y_0^{(j+i)} + h^{k-i}\sum_{j=0}^{k-1-i} b_j^{(k-1-i)}\,\gamma_j, \quad i=0,\dots,k-1,$$
with $b_j^{(k-1-i)}$, $j=0,\dots,k-1-i$, being the $(j+1)$st entry of the first row of the matrix $X^{k-1-i}$, $i=0,\dots,k-1$.
A polynomial approximation of degree $s$ (resulting, as usual, in a method of order $2s$) can be derived by formally substituting, in Equation (76), $\mathcal{P}(\tau)$, $\mathcal{I}(\tau)$, and $X$ with $\mathcal{P}_s(\tau)$, $\mathcal{I}_s(\tau)$, and $X_s$, respectively (consequently, the vector $\boldsymbol\gamma$ now belongs to $\mathbb{R}^{sm}$), with the new approximations $y_1^{(i)}\approx y^{(i)}(h)$ given by:
$$y_1^{(i)} := \sum_{j=0}^{k-1-i}\frac{h^j}{j!}\,y_0^{(j+i)} + h^{k-i}\sum_{j=0}^{k-1-i} b_j^{(k-1-i)}\,\gamma_j, \quad i=0,\dots,k-1,$$
with $b_j^{(k-1-i)}$, $j=0,\dots,k-1-i$, being now the $(j+1)$st entry of the first row of the matrix $X_s^{k-1-i}$, $i=0,\dots,k-1$.

4. Discretization

“As is well known, even many relatively simple integrals cannot be expressed in finite terms of elementary functions, and thus must be evaluated by numerical methods.”
— Dahlquist and Björck ([41], p. 521)
As is clear, the integrals involved in the Fourier coefficients (36) need to be approximated by using a suitable quadrature rule, which we choose as the interpolatory Gauss–Legendre quadrature of order $2k$, with $k\ge s$, having weights and abscissae $(b_i,c_i)$, $i=1,\dots,k$. In so doing, the vector of the Fourier coefficients (35) becomes
$$\hat{\boldsymbol\gamma}_s := \begin{pmatrix}\hat\gamma_0\\ \vdots\\ \hat\gamma_{s-1}\end{pmatrix},$$
satisfying the Equation (compare with (36))
$$\hat{\boldsymbol\gamma}_s = \sum_{j=1}^{k} b_j\,\mathcal{P}_s(c_j)\otimes I_m\; f\!\left(y_0 + h\,\mathcal{I}_s(c_j)^\top\otimes I_m\,\hat{\boldsymbol\gamma}_s\right)$$
with (compare with (37)),
$$y_1 = y_0 + h\,\hat\gamma_0.$$
We observe that, by introducing the matrices (see (30))
$$\mathcal{P}_r := \begin{pmatrix}\mathcal{P}_r(c_1)^\top\\ \vdots\\ \mathcal{P}_r(c_k)^\top\end{pmatrix}\in\mathbb{R}^{k\times r}, \qquad \mathcal{I}_s := \begin{pmatrix}\mathcal{I}_s(c_1)^\top\\ \vdots\\ \mathcal{I}_s(c_k)^\top\end{pmatrix}\in\mathbb{R}^{k\times s}, \qquad \Omega = \begin{pmatrix}b_1\\ & \ddots\\ & & b_k\end{pmatrix},$$
one has, with reference to (31),
$$\mathcal{I}_s = \mathcal{P}_{s+1}\hat X_s, \qquad \mathcal{P}_s^\top\Omega\,\mathcal{P}_s = I_s, \qquad \mathcal{P}_s^\top\Omega\,\mathcal{I}_s = X_s.$$
Further, by also introducing the vector $e = (1,\dots,1)^\top\in\mathbb{R}^k$, one obtains that (80) can be rewritten as
$$\hat{\boldsymbol\gamma}_s = \mathcal{P}_s^\top\Omega\otimes I_m\; f\!\left(e\otimes y_0 + h\,\mathcal{I}_s\otimes I_m\,\hat{\boldsymbol\gamma}_s\right).$$
By setting
$$Y \equiv \begin{pmatrix}Y_1\\ \vdots\\ Y_k\end{pmatrix} := e\otimes y_0 + h\,\mathcal{I}_s\otimes I_m\,\hat{\boldsymbol\gamma}_s,$$
the vector of the stages of the corresponding Runge–Kutta method, from (84) and (85) one obtains the stage equation
$$Y = e\otimes y_0 + h\,\mathcal{I}_s\mathcal{P}_s^\top\Omega\otimes I_m\; f(Y),$$
with the new approximation
$$y_1 = y_0 + h\sum_{i=1}^{k} b_i\, f(Y_i).$$
Summing up, (84)–(87) define the $k$-stage Runge–Kutta method:
$$\begin{array}{c|c} c & \mathcal{I}_s\mathcal{P}_s^\top\Omega\\ \hline & b^\top \end{array}\,, \qquad b = (b_1,\dots,b_k)^\top, \quad c = (c_1,\dots,c_k)^\top,$$
named HBVM$(k,s)$ [4,29,30]. It is worth mentioning that the Butcher matrix of the method, with reference to the coefficients of the csRK (33), is given by:
$$\mathcal{I}_s\mathcal{P}_s^\top\Omega \equiv \left(b_j\, a_s(c_i,c_j)\right)\in\mathbb{R}^{k\times k}.$$
In particular, when $k=s$, one obtains the $s$-stage Gauss collocation method. However, the use of values $k>s$ (and even $k\gg s$) is useful in view of deriving energy-conserving methods for Hamiltonian systems [4,5,29,30]. In fact, it can be proved (see, e.g., ([5], Corollary 3)) that an HBVM$(k,s)$ method, with $k\ge s$:
  • Has order $2s$;
  • Is energy-conserving for all polynomial Hamiltonians of degree not larger than $2k/s$;
  • For general (and suitably regular) Hamiltonians, the Hamiltonian error is $O(h^{2k+1})$. In this case, by using a value of $k$ large enough that the Hamiltonian error falls within the round-off error level, the method turns out to be practically energy-conserving.
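The construction of the Butcher tableau (88) is straightforward to code. The following Python sketch (function names are ours) builds $A = \mathcal{I}_s\mathcal{P}_s^\top\Omega$ from the Gauss–Legendre weights and the shifted Legendre basis; as stated above, for $k=s$ it reproduces the Gauss collocation method:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def shifted_legendre(j):
    # Orthonormal shifted Legendre on [0,1]: sqrt(2j+1) * L_j(2x-1)
    return np.sqrt(2 * j + 1) * Legendre.basis(j, domain=[0, 1])

def hbvm_tableau(k, s):
    """Butcher tableau (A, b, c) of HBVM(k,s): A = I_s P_s^T Omega, see (88)."""
    x, w = leggauss(k)
    c, b = (x + 1) / 2, w / 2                 # k-point Gauss-Legendre rule on [0,1]
    Ps = np.array([shifted_legendre(j)(c) for j in range(s)]).T            # k x s
    Is = np.array([shifted_legendre(j).integ(lbnd=0)(c) for j in range(s)]).T
    A = Is @ Ps.T @ np.diag(b)
    return A, b, c

# k = s = 2 reproduces the 2-stage Gauss collocation method
A, b, c = hbvm_tableau(2, 2)
g = np.sqrt(3) / 6
print(np.allclose(A, [[1/4, 1/4 - g], [1/4 + g, 1/4]]),
      np.allclose(b, [1/2, 1/2]))            # True True
```

The same routine with $k>s$ yields the (rank-deficient) Butcher matrices of the energy-conserving HBVMs.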
Remark 8.
As is clear, the formulation (80) and (81) is computationally more effective than (86) and (87), since the former has (block) dimension $s$, independently of $k$, whereas $k$ is the (block) dimension of the latter. As observed in [32], this allows the use of relatively large values of $k$ without increasing the computational cost too much.
Similar arguments can be repeated in the case of the polynomial approximations for problems (38), (39), and (74): here we only sketch those concerning the csRKN method (66), providing the correct implementation of an HBVM$(k,s)$ method for the special second-order problem (38). By formally using the same approximate Fourier coefficients (79) in place of (67), one has that (68) is replaced by the following discrete counterpart,
$$\hat{\boldsymbol\gamma}_s = \sum_{j=1}^{k} b_j\,\mathcal{P}_s(c_j)\otimes I_m\; f\!\left(y_0 + c_j h\,\dot y_0 + h^2\,\mathcal{I}_s(c_j)^\top X_s\otimes I_m\,\hat{\boldsymbol\gamma}_s\right)$$
with the new approximations (compare with (69)) given by
$$\dot y_1 = \dot y_0 + h\,\hat\gamma_0, \qquad y_1 = y_0 + h\,\dot y_0 + h^2\left[\xi_0\hat\gamma_0 - \xi_1\hat\gamma_1\right].$$
As in the first-order case, (89) can be rewritten as
$$\hat{\boldsymbol\gamma}_s = \mathcal{P}_s^\top\Omega\otimes I_m\; f\!\left(e\otimes y_0 + h\,c\otimes\dot y_0 + h^2\,\mathcal{I}_s X_s\otimes I_m\,\hat{\boldsymbol\gamma}_s\right),$$
where $c$ is the vector of the abscissae. Again, the vector
$$Y \equiv \begin{pmatrix}Y_1\\ \vdots\\ Y_k\end{pmatrix} := e\otimes y_0 + h\,c\otimes\dot y_0 + h^2\,\mathcal{I}_s X_s\otimes I_m\,\hat{\boldsymbol\gamma}_s$$
is the stage vector of a $k$-stage RKN method. In particular, from (91) and (92) one obtains
$$Y = e\otimes y_0 + h\,c\otimes\dot y_0 + h^2\,\mathcal{I}_s X_s\mathcal{P}_s^\top\Omega\otimes I_m\; f(Y),$$
and some algebra shows that the new approximations are given by
$$\dot y_1 = \dot y_0 + h\sum_{i=1}^{k} b_i\, f(Y_i), \qquad y_1 = y_0 + h\,\dot y_0 + h^2\sum_{i=1}^{k} b_i(1-c_i)\, f(Y_i).$$
Consequently, we obtain the following $k$-stage RKN method:
$$\begin{array}{c|c} c & \mathcal{I}_s X_s\mathcal{P}_s^\top\Omega\\ \hline & [\,b\circ(e-c)\,]^\top\\ & b^\top \end{array}\,,$$
where $\circ$ denotes the Hadamard, i.e., componentwise, product. The Butcher tableau (95) defines the RKN formulation of an HBVM$(k,s)$ method [21,29,32]. It is worth mentioning that, with reference to (60) and (65), the Butcher matrix can be written in terms of the corresponding csRKN method (66) as follows:
$$\mathcal{I}_s X_s\mathcal{P}_s^\top\Omega \equiv \left(b_j\,\bar a_s(c_i,c_j)\right) \equiv \left(b_j\int_0^1 a_s(c_i,\tau)\,a_s(\tau,c_j)\,\mathrm{d}\tau\right)\in\mathbb{R}^{k\times k}.$$
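Since $\mathcal{P}_s^\top\Omega\,\mathcal{I}_s = X_s$ by (83), one may verify that the RKN matrix (95) coincides with the square of the RK matrix (88), $\mathcal{I}_s X_s\mathcal{P}_s^\top\Omega = (\mathcal{I}_s\mathcal{P}_s^\top\Omega)^2$, in agreement with (65). A self-contained numerical check of this fact (a Python sketch with our own names), building $X_s$ independently from its tridiagonal pattern (31):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def shifted_legendre(j):
    # Orthonormal shifted Legendre on [0,1]: sqrt(2j+1) * L_j(2x-1)
    return np.sqrt(2 * j + 1) * Legendre.basis(j, domain=[0, 1])

def xi(i):
    return 1.0 / (2.0 * np.sqrt(abs(4 * i * i - 1)))

k, s = 6, 2
x, w = leggauss(k)
c, b = (x + 1) / 2, w / 2
Ps = np.array([shifted_legendre(j)(c) for j in range(s)]).T
Is = np.array([shifted_legendre(j).integ(lbnd=0)(c) for j in range(s)]).T
Om = np.diag(b)

Xs = np.zeros((s, s))                 # X_s from its tridiagonal pattern, see (31)
Xs[0, 0] = xi(0)
for i in range(1, s):
    Xs[i, i - 1], Xs[i - 1, i] = xi(i), -xi(i)

A    = Is @ Ps.T @ Om                 # RK matrix (88)
Abar = Is @ Xs @ Ps.T @ Om            # RKN matrix (95)
print(np.allclose(Abar, A @ A))       # True
```

The check exploits the fact that the quadrature of order $2k$ is exact for the polynomial entries of $\mathcal{P}_s^\top\Omega\,\mathcal{I}_s$.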
Remark 9.
As observed in Remark 8, in this case, too, since $k\ge s$, the formulation (90) and (91) of the method is much more efficient than the usual one given by (93) and (94), especially in view of the use of values $k\gg s$, as in the case of separable Hamiltonian problems, as the numerical example below will show.
It must be emphasized that:
  • Very efficient iterative procedures exist for solving the discrete problems (84) and (91) generated by an HBVM method (see, e.g., [32]). As a matter of fact, we mention that the state-of-the-art Matlab code hbvm is available at the website of the book [29];
  • When HBVMs are used as spectral methods in time, i.e., when $s$, and $k>s$, are chosen so that no further accuracy improvement can be obtained for the considered finite-precision arithmetic and timestep $h$ [22,31,35], there is no practical difference between the discrete methods and their continuous-stage counterparts.

Numerical Tests

We now report a few numerical examples comparing energy-conserving methods with symplectic ones. For this purpose, we used the same Matlab code hbvm mentioned above: in fact, for $k=s$, it also provides a very efficient implementation of the symplectic $s$-stage Gauss collocation methods. All tests have been executed on a 3 GHz Intel Xeon W 10-core computer with 64GB of memory, using Matlab© R2020b.
Symplectic methods usually perform very well when solving Hamiltonian problems (see, e.g., the monographs [42,43,44,45] or [46,47,48,49]), but in some circumstances (e.g., when relatively large timesteps are used) energy-conserving methods may be more reliable. As an example of this fact, we consider the separable Hamiltonian problem described by the following polynomial Hamiltonian:
$$H(q,p) = \frac{1}{2}\left(p_1^2+p_2^2\right) + \frac{\beta}{2}\left(q_1^2+q_2^2\right) + \alpha\left(q_1 - \gamma\, q_2\right)^{2n}, \qquad q,p\in\mathbb{R}^2.$$
In particular, we consider the following values of the parameters:
$$\alpha = \beta = n = 5, \qquad \gamma = 2.48.$$
At first, we numerically integrated the trajectory starting at $q(0)=(1,1)^\top$, $p(0)=(0,0)^\top$, with a timestep $h=5\cdot 10^{-3}$, on the interval $[0,250]$, by using the symplectic 2-stage Gauss method (i.e., HBVM(2,2)) and the energy-conserving HBVM(10,2) method: in both cases, the methods were used as RKN methods. A reference solution was computed by using a spectral HBVM [22,31,35]. The two methods required comparable execution times (8.5 s vs. 12.8 s, respectively), and the obtained numerical results are comparable as well, as shown in Figure 1, where we have plotted the Hamiltonian error and the phase-space plots in the $q_1$–$p_1$ and $q_2$–$p_2$ planes. The Gauss method, though not energy-conserving, exhibits a quasi-conservation of the numerical Hamiltonian. Subsequently, we integrated the same orbit with a timestep $h=10^{-2}$. Again, the two methods required comparable execution times (6.5 s vs. 8.7 s, respectively), but the obtained numerical results are now quite different, as shown in Figure 2, where we plot, as before, the Hamiltonian error and the phase-space plots in the $q_1$–$p_1$ and $q_2$–$p_2$ planes. As one may see, the energy-conserving method is now qualitatively more accurate than the symplectic 2-stage Gauss method, which no longer exhibits the quasi-conservation of the energy.
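The qualitative behavior described above can also be reproduced on a smaller scale. The following Python sketch (our own minimal implementation, not the Matlab code hbvm) advances a first-order Hamiltonian system by iterating the discrete Fourier coefficients as in (80) and (81); for the quartic oscillator $H = p^2/2 + q^4/4$ (a degree-4 polynomial Hamiltonian, used here for brevity instead of (97)), HBVM(2,1) is exactly energy-conserving, since $2k/s = 4$:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def shifted_legendre(j):
    # Orthonormal shifted Legendre on [0,1]: sqrt(2j+1) * L_j(2x-1)
    return np.sqrt(2 * j + 1) * Legendre.basis(j, domain=[0, 1])

def hbvm_step(f, y0, h, k, s, tol=1e-14, itmax=100):
    """One step of HBVM(k,s) by fixed-point iteration on gamma_hat, as in (80)."""
    x, w = leggauss(k)
    c, b = (x + 1) / 2, w / 2
    Ps = np.array([shifted_legendre(j)(c) for j in range(s)]).T   # k x s
    Is = np.array([shifted_legendre(j).integ(lbnd=0)(c) for j in range(s)]).T
    g = np.zeros((s, len(y0)))
    for _ in range(itmax):
        Y = y0 + h * Is @ g                                   # stage values, k x m
        gn = (b[:, None] * Ps).T @ np.array([f(Yi) for Yi in Y])
        done = np.max(np.abs(gn - g)) < tol
        g = gn
        if done:
            break
    return y0 + h * g[0]              # y1 = y0 + h * gamma_hat_0, see (81)

H = lambda q, p: p**2 / 2 + q**4 / 4
f = lambda y: np.array([y[1], -y[0]**3])      # f(y) = J grad H(y), y = (q, p)
y = np.array([1.0, 0.0])
H0 = H(*y)
for _ in range(200):
    y = hbvm_step(f, y, h=0.1, k=2, s=1)
print(abs(H(*y) - H0))                # energy error at round-off level
```

Increasing the polynomial degree of $H$ beyond $2k/s$ (or using the $k=s$ Gauss method) makes the energy error merely bounded, rather than negligible.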

5. Conclusions

In this paper, we have reviewed the basic facts and reported some new insights on the energy-conserving class of Runge–Kutta methods named HBVMs. The methods have been studied here within the framework of continuous-stage Runge–Kutta methods. The extension to second-order problems has also been recalled, providing a natural continuous-stage Runge–Kutta–Nyström formulation of the methods. Further, the extension to general kth-order problems has also been sketched out. Finally, the relation with the fully discrete methods has been recalled, thereby showing the usefulness of this framework for their study.

Author Contributions

All authors have read and agreed to the published version of the manuscript.

Funding

The authors acknowledge the financial support from the mrSIR project [50].

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank the referees for the careful reading of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Butcher, J.C. An algebraic theory of integration methods. Math. Comp. 1972, 26, 79–106.
  2. Butcher, J.C. The Numerical Analysis of Ordinary Differential Equations: Runge–Kutta and General Linear Methods; John Wiley & Sons: Chichester, UK, 1987.
  3. Butcher, J.C.; Wanner, G. Runge–Kutta methods: Some historical notes. Appl. Numer. Math. 1996, 22, 113–151.
  4. Brugnano, L.; Iavernaro, F.; Trigiante, D. Hamiltonian Boundary Value Methods (Energy Preserving Discrete Line Integral Methods). JNAIAM J. Numer. Anal. Ind. Appl. Math. 2010, 5, 17–37.
  5. Brugnano, L.; Iavernaro, F.; Trigiante, D. A simple framework for the derivation and analysis of effective one-step methods for ODEs. Appl. Math. Comput. 2012, 218, 8475–8485.
  6. Hairer, E. Energy-preserving variant of collocation methods. JNAIAM J. Numer. Anal. Ind. Appl. Math. 2010, 5, 73–84.
  7. Li, J.; Gao, Y. Energy-preserving trigonometrically fitted continuous stage Runge–Kutta–Nyström methods for oscillatory Hamiltonian systems. Numer. Algorithms 2019, 81, 1379–1401.
  8. Miyatake, Y. An energy-preserving exponentially-fitted continuous stage Runge–Kutta method for Hamiltonian systems. BIT 2014, 54, 777–799.
  9. Miyatake, Y.; Butcher, J.C. A characterization of energy preserving methods and the construction of parallel integrators for Hamiltonian systems. SIAM J. Numer. Anal. 2016, 54, 1993–2013.
  10. Quispel, G.R.W.; McLaren, D.I. A new class of energy-preserving numerical integration methods. J. Phys. A Math. Theor. 2008, 41, 045206.
  11. Tang, Q.; Chen, C.M. Continuous finite element methods for Hamiltonian systems. Appl. Math. Mech. 2007, 28, 1071–1080.
  12. Tang, W. A note on continuous-stage Runge–Kutta methods. Appl. Math. Comput. 2018, 339, 231–241.
  13. Tang, W.; Lang, G.; Luo, X. Construction of symplectic (partitioned) Runge–Kutta methods with continuous stage. Appl. Math. Comput. 2016, 286, 279–287.
  14. Tang, W.; Sun, Y. Time finite element methods: A unified framework for numerical discretizations of ODEs. Appl. Math. Comput. 2012, 219, 2158–2179.
  15. Tang, W.; Sun, Y. Construction of Runge–Kutta type methods for solving ordinary differential equations. Appl. Math. Comput. 2014, 234, 179–191.
  16. Tang, W.; Zhang, J. Symmetric integrators based on continuous-stage Runge–Kutta–Nyström methods for reversible systems. Appl. Math. Comput. 2019, 361, 1–12.
  17. Xin, X.; Qin, W.; Ding, X. Continuous stage stochastic Runge–Kutta methods. Adv. Differ. Equ. 2021, 61, 1–22.
  18. Wang, B.; Wu, X.; Fang, Y. A continuous-stage modified Leap-frog scheme for high-dimensional semi-linear Hamiltonian wave equations. Numer. Math. Theory Methods Appl. 2020, 13, 814–844.
  19. Yamamoto, Y. On eigenvalues of a matrix arising in energy-preserving/dissipative continuous-stage Runge–Kutta methods. Spec. Matrices 2022, 10, 34–39.
  20. Amodio, P.; Brugnano, L.; Iavernaro, F. Energy-conserving methods for Hamiltonian Boundary Value Problems and applications in astrodynamics. Adv. Comput. Math. 2015, 41, 881–905.
  21. Amodio, P.; Brugnano, L.; Iavernaro, F. A note on the continuous-stage Runge–Kutta-(Nyström) formulation of Hamiltonian Boundary Value Methods (HBVMs). Appl. Math. Comput. 2019, 363, 124634.
  22. Amodio, P.; Brugnano, L.; Iavernaro, F. Analysis of Spectral Hamiltonian Boundary Value Methods (SHBVMs) for the numerical solution of ODE problems. Numer. Algorithms 2020, 83, 1489–1508.
  23. Amodio, P.; Brugnano, L.; Iavernaro, F. Arbitrarily high-order energy-conserving methods for Poisson problems. Numer. Algorithms 2022.
  24. Barletti, L.; Brugnano, L.; Tang, Y.; Zhu, B. Spectrally accurate space-time solution of Manakov systems. J. Comput. Appl. Math. 2020, 377, 112918.
  25. Brugnano, L.; Frasca-Caccia, G.; Iavernaro, F. Line Integral Solution of Hamiltonian PDEs. Mathematics 2019, 7, 275.
  26. Brugnano, L.; Gurioli, G.; Iavernaro, F.; Weinmüller, E.B. Line integral solution of Hamiltonian systems with holonomic constraints. Appl. Numer. Math. 2018, 127, 56–77.
  27. Brugnano, L.; Gurioli, G.; Sun, Y. Energy-conserving Hamiltonian Boundary Value Methods for the numerical solution of the Korteweg–de Vries equation. J. Comput. Appl. Math. 2019, 351, 117–135.
  28. Brugnano, L.; Gurioli, G.; Zhang, C. Spectrally accurate energy-preserving methods for the numerical solution of the “Good” Boussinesq equation. Numer. Methods Partial Differ. Equ. 2019, 35, 1343–1362.
  29. Brugnano, L.; Iavernaro, F. Line Integral Methods for Conservative Problems; Chapman & Hall/CRC: Boca Raton, FL, USA, 2016; Available online: http://web.math.unifi.it/users/brugnano/LIMbook/ (accessed on 15 April 2022).
  30. Brugnano, L.; Iavernaro, F. Line Integral Solution of Differential Problems. Axioms 2018, 7, 36.
  31. Brugnano, L.; Iavernaro, F.; Montijano, J.I.; Rández, L. Spectrally accurate space-time solution of Hamiltonian PDEs. Numer. Algorithms 2019, 81, 1183–1202.
  32. Brugnano, L.; Iavernaro, F.; Trigiante, D. A note on the efficient implementation of Hamiltonian BVMs. J. Comput. Appl. Math. 2011, 236, 375–383.
  33. Brugnano, L.; Iavernaro, F.; Trigiante, D. A two-step, fourth-order method with energy preserving properties. Comput. Phys. Commun. 2012, 183, 1860–1868.
  34. Brugnano, L.; Iavernaro, F.; Zhang, R. Arbitrarily high-order energy-preserving methods for simulating the gyrocenter dynamics of charged particles. J. Comput. Appl. Math. 2020, 380, 112994.
  35. Brugnano, L.; Montijano, J.I.; Rández, L. On the effectiveness of spectral methods for the numerical solution of multi-frequency highly-oscillatory Hamiltonian problems. Numer. Algorithms 2019, 81, 345–376. [Google Scholar] [CrossRef] [Green Version]
  36. Brugnano, L.; Montijano, J.I.; Rández, L. High-order energy-conserving Line Integral Methods for charged particle dynamics. J. Comput. Phys. 2019, 396, 209–227. [Google Scholar] [CrossRef] [Green Version]
  37. Brugnano, L.; Sun, Y. Multiple invariants conserving Runge–Kutta type methods for Hamiltonian problems. Numer. Algorithms 2014, 65, 611–632. [Google Scholar] [CrossRef] [Green Version]
  38. Tang, W.; Sun, Y.; Zhang, J. High order symplectic integrators based on continuous-stage Runge–Kutta-Nyström methods. Appl. Math. Comput. 2019, 361, 670–679. [Google Scholar] [CrossRef] [Green Version]
  39. Tang, W.; Zhang, J. Symplecticity-preserving continuous stage Runge–Kutta-Nyström methods. Appl. Math. Comput. 2018, 323, 204–219. [Google Scholar] [CrossRef] [Green Version]
  40. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2002. [Google Scholar]
41. Dahlquist, G.; Björck, Å. Numerical Methods in Scientific Computing; SIAM: Philadelphia, PA, USA, 2008; Volume I.
42. Blanes, S.; Casas, F. A Concise Introduction to Geometric Numerical Integration; CRC Press: Boca Raton, FL, USA, 2016.
43. Hairer, E.; Lubich, C.; Wanner, G. Geometric Numerical Integration; Springer: Berlin/Heidelberg, Germany, 2006.
44. Leimkuhler, B.; Reich, S. Simulating Hamiltonian Dynamics; Cambridge University Press: Cambridge, UK, 2004.
45. Sanz-Serna, J.M.; Calvo, M.P. Numerical Hamiltonian Problems; Chapman & Hall: London, UK, 1994.
46. Hairer, E.; Lubich, C. Long-term analysis of the Störmer–Verlet method for Hamiltonian systems with a solution-dependent high frequency. Numer. Math. 2016, 134, 119–138.
47. McLachlan, R.I. Tuning symplectic integrators is easy and worthwhile. Commun. Comput. Phys. 2022, 31, 987–996.
48. Sanz-Serna, J.M. Symplectic Runge–Kutta schemes for adjoint equations, automatic differentiation, optimal control, and more. SIAM Rev. 2016, 58, 3–33.
49. Wang, B.; Wu, X. A long-term numerical energy-preserving analysis of symmetric and/or symplectic extended RKN integrators for efficiently solving highly oscillatory Hamiltonian systems. BIT Numer. Math. 2021, 61, 977–1004.
  50. Available online: https://www.mrsir.it/en/about-us/ (accessed on 15 April 2022).
Figure 1. Numerical solution of the problem defined by (96) and (97) by using the HBVM(10,2) method (left plots) and the 2-stage Gauss method, i.e., HBVM(2,2) (central plots), with timestep h = 5·10⁻³. The right plots refer to the reference solution. Top row: Hamiltonian errors; middle row: phase plots in the q₁–p₁ plane; bottom row: phase plots in the q₂–p₂ plane.
Figure 2. Numerical solution of the problem defined by (96) and (97) by using the HBVM(10,2) method (left plots) and the 2-stage Gauss method, i.e., HBVM(2,2) (central plots), with timestep h = 10⁻². The right plots refer to the reference solution. Top row: Hamiltonian errors; middle row: phase plots in the q₁–p₁ plane; bottom row: phase plots in the q₂–p₂ plane.
