Principal matrix solutions and variation of parameters for a Volterra integro-differential equation and its adjoint

We define the principal matrix solution Z(t, s) of the linear Volterra vector integro-differential equation

x'(t) = A(t)x(t) + \int_s^t B(t, u)x(u)\,du

in the same way that it is defined for x' = A(t)x and prove that it is the unique matrix solution of

\partial Z(t, s)/\partial t = A(t)Z(t, s) + \int_s^t B(t, u)Z(u, s)\,du,   Z(s, s) = I.

Furthermore, we prove that the solution of

x'(t) = A(t)x(t) + \int_\tau^t B(t, u)x(u)\,du + f(t),   x(\tau) = x_0

is unique and given by the variation of parameters formula

x(t) = Z(t, \tau)x_0 + \int_\tau^t Z(t, s)f(s)\,ds.

We also define the principal matrix solution R(t, s) of the adjoint equation

r'(s) = -r(s)A(s) - \int_s^t r(u)B(u, s)\,du

and prove that it is identical to Grossman and Miller's resolvent, which is the unique matrix solution of

\partial R(t, s)/\partial s = -R(t, s)A(s) - \int_s^t R(t, u)B(u, s)\,du,   R(t, t) = I.

Finally, we prove that, despite the difference in their definitions, R(t, s) and Z(t, s) are in fact identical.

Introduction: Resolvent vs Principal Matrix Solution
The variation of parameters formula

(1.1)   x(t) = R(t, 0)x_0 + \int_0^t R(t, s)f(s)\,ds

gives the unique solution of the linear nonhomogeneous Volterra vector integro-differential equation

(1.2)   x'(t) = A(t)x(t) + \int_0^t B(t, u)x(u)\,du + f(t)

satisfying the initial condition x(0) = x_0. It has been around for at least 36 years: Grossman and Miller defined the matrix function R(t, s), called the resolvent, and used it to derive (1.1) in 1970 in their classic paper [12]. They formally defined R(t, s) by

(1.3)   R(t, s) = I + \int_s^t R(t, u)\Psi(u, s)\,du,

where I is the identity matrix and

(1.4)   \Psi(t, s) = A(t) + \int_s^t B(t, v)\,dv.
They proved that R(t, s) exists and is continuous for 0 ≤ s ≤ t and that it satisfies

(1.5)   \partial R(t, s)/\partial s = -R(t, s)A(s) - \int_s^t R(t, u)B(u, s)\,du,   R(t, t) = I.

Despite the prominence of the resolvent R(t, s) in the literature and its indispensability, its definition (1.3) is not as conceptually simple as one would like. However, there is a more fundamental way to look at R(t, s), and that is from the standpoint of linear systems of ODEs. In 1979 in my dissertation [1], I defined the principal matrix solution Z(t, s) of the homogeneous equation

(1.6)   x'(t) = A(t)x(t) + \int_s^t B(t, u)x(u)\,du

in a manner analogous to the definition for x' = A(t)x that is given by Hale in [14, p. 80]: Z(t, s) is a matrix solution of (1.6) with columns that are linearly independent such that Z(s, s) = I.
Using Z(t, s) instead of R(t, s), the variation of parameters formula

(1.7)   x(t) = Z(t, \tau)x_0 + \int_\tau^t Z(t, s)f(s)\,ds

is a natural extension of the variation of parameters formula for the nonhomogeneous vector differential equation x'(t) = A(t)x(t) + f(t). The principal matrix version of the resolvent equation (1.5), namely,

(1.8)   \partial Z(t, s)/\partial t = A(t)Z(t, s) + \int_s^t B(t, u)Z(u, s)\,du,   Z(s, s) = I,

has been instrumental in a number of papers for obtaining results that might not have otherwise been obtained with (1.5) alone.
The principal matrix solution Z(t, s), the variation of parameters formula (1.7), and the principal matrix equation (1.8) are used and cited in papers by Becker et al. [3], Burton [6, 7], Eloe et al. [11], Islam and Raffoul [17], Raffoul [19], Hino and Murakami [20, 21], Zhang [22], and in the monographs [4, Ch. 7] and [8, Ch. 5] by T. A. Burton. Burton synopsizes some of the results from [1] in Section 7.1 of [4] and perceptively contrasts the difference in the definitions of the principal matrix solution Z(t, s) and Grossman and Miller's resolvent R(t, s). However, complete proofs of these results and concomitant definitions and applications have never been published; for that reason we do so now in Sections 2-5 of this paper.
Not found in [1] is an alternative to Grossman and Miller's definition of R(t, s). It is this: R(t, s) is the transpose of the principal matrix solution of the adjoint equation

r'(s) = -r(s)A(s) - \int_s^t r(u)B(u, s)\,du.

Existence and Uniqueness

Standard existence and uniqueness theorems could be invoked for (2.1). However, here we present a proof that avoids such references; instead we change the initial value problem consisting of (2.1) and an initial condition x(s) = x_0 into an equivalent integral equation, from which we construct a contraction mapping with a unique fixed point, which will be the unique solution.
The space C[a, b] with the metric \rho_r is complete, which we denote by (C[a, b], \rho_r).

Definition 2.1. Let x_0 ∈ R^n. A solution of (2.1) on the interval [s, T), where s < T ≤ ∞, with the initial value x_0 at t = s is a differentiable function x : [s, T) → R^n that satisfies (2.1) on (s, T) and the initial condition x(s) = x_0.

Theorem 2.2. For a given x_0 ∈ R^n, there is a unique solution of (2.1) on [s, ∞) with the initial value x_0 at t = s.

Proof. We begin by inverting (2.1) to obtain an equivalent integral equation from which we will be able to define a contraction mapping. Integrating (2.1) from s to t and replacing x(s) with x_0, we get

x(t) = x_0 + \int_s^t A(v)x(v)\,dv + \int_s^t \int_s^v B(v, u)x(u)\,du\,dv + \int_s^t f(v)\,dv.

Interchanging the order of integration in the iterated integral, we have

(2.4)   x(t) = x_0 + \int_s^t \left[ A(u) + \int_u^t B(v, u)\,dv \right] x(u)\,du + \int_s^t f(v)\,dv.

This shows that a differentiable function x(t) that satisfies (2.1) and the initial condition x(s) = x_0 also satisfies the integral equation (2.4). For such a function the integrand B(v, u)x(u) is continuous, which justifies the interchange in the order of integration.
Conversely, if x(t) is a continuous function that satisfies (2.4), then the integrands in (2.4) are continuous; as a result, x(t) is also differentiable. Differentiating (2.4) with the aid of Leibniz's rule, we find that x(t) also satisfies (2.1). Setting t = s in (2.4), we have x(s) = x_0.
The point is that Theorem 2.2 is equivalent to the statement that there is a unique continuous function x that satisfies (2.4) on [s, ∞) for a given x_0 ∈ R^n. So proving the latter would prove Theorem 2.2. In other words, we need to prove that a unique continuous function x exists such that x(t) = (P x)(t), where (P x)(t) is the right-hand side of (2.4). To this end, choose any T > s and let (C[s, T], \rho_r) be the complete metric space described earlier, where r is a fixed real number whose value will be addressed shortly. Now we can show that P is a contraction mapping on (C[s, T], \rho_r). Since A(t) and B(t, u) are continuous for s ≤ u ≤ t ≤ T, there is an r > 1 such that

|A(u)| + \int_u^T |B(v, u)|\,dv ≤ r - 1   for s ≤ u ≤ T.

For such an r,

|(P\phi)(t) - (P\eta)(t)| ≤ \int_s^t \left[ |A(u)| + \int_u^t |B(v, u)|\,dv \right] |\phi(u) - \eta(u)|\,du ≤ (r - 1) \int_s^t e^{r(u-s)}\,du\, \rho_r(\phi, \eta) ≤ \frac{r - 1}{r} e^{r(t-s)} \rho_r(\phi, \eta).

From this it follows that \rho_r(P\phi, P\eta) ≤ \frac{r - 1}{r} \rho_r(\phi, \eta). Since (r - 1)/r < 1, P is a contraction; its unique fixed point is the unique continuous solution of (2.4) on [s, T]. As T > s was arbitrary, the solution exists and is unique on [s, ∞).
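The contraction argument above is constructive: iterating P from any starting function converges to the unique solution. The following is a minimal numerical sketch of successive approximations for the integral equation (2.4) in the scalar case; the coefficients A ≡ -1, B ≡ 0.5, f ≡ 0 and the grid are hypothetical choices, not from the paper.

```python
import numpy as np

# Successive approximations x_{k+1} = P(x_k) for the scalar case of (2.4)
# with A(t) = -1, B(t, u) = 0.5, f = 0, s = 0, x0 = 1 on [0, 1].
# The kernel of (2.4) is then A(u) + \int_u^t B dv = a + (t - u) * b.
n = 201
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
a, b, x0 = -1.0, 0.5, 1.0

def trap(y, h):
    """Composite trapezoidal rule on a uniform grid."""
    return 0.0 if len(y) < 2 else h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def P(x):
    """The mapping P: x -> right-hand side of (2.4), evaluated on the grid."""
    out = np.empty(n)
    for i in range(n):
        kernel = a + (t[i] - t[:i + 1]) * b
        out[i] = x0 + trap(kernel * x[:i + 1], h)
    return out

x = np.full(n, x0)                     # initial guess: the constant function x0
for _ in range(40):
    x_new = P(x)
    diff = np.max(np.abs(x_new - x))   # sup-distance between successive iterates
    x = x_new
```

Because the operator is of Volterra type, the iterates contract on any bounded interval, and `diff` settles to machine precision well before 40 iterations.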

Joint Continuity
For a given x_0 ∈ R^n, the homogeneous equation

(3.1)   x'(t) = A(t)x(t) + \int_s^t B(t, u)x(u)\,du

has a unique solution x_s satisfying the initial condition x_s(s) = x_0 by Theorem 2.2 (with f(t) ≡ 0). Equivalently, by (2.4), x_s is the unique continuous solution of

x(t) = x_0 + \int_s^t \left[ A(u) + \int_u^t B(v, u)\,dv \right] x(u)\,du.

Up to now the value of s has been fixed. But with that restriction removed, the totality of values x_s(t) defines a function, say x, on the set Ω := { (t, s) : 0 ≤ s ≤ t < ∞ } whose value at (t_1, s_1) ∈ Ω is the value of the solution x_{s_1} at t = t_1.
Definition 3.1. For a given x_0 ∈ R^n, let x denote the function with domain Ω whose value at (t, s) is

x(t, s) := x_s(t),

where x_s is the unique solution of (3.1) on [s, ∞) satisfying the initial condition x_s(s) = x_0.
Since x(t, s) is continuous in t for a fixed s, it is natural to ask if it is also continuous in s for a fixed t, and if so, whether it is jointly continuous in t and s. The next theorem answers both questions in the affirmative. It will play an essential role in the proof of the variation of parameters formula for (2.1) that is given in Section 5.

Theorem 3.2. The function x(t, s) of Definition 3.1 is continuous on Ω.

Proof. First extend the domain Ω of the function x to the entire plane by defining x(t, s) = x_0 for s > t. For any T > 0, consider x(t, s) on [0, T] × [0, T]. We will prove that x(t, s) is continuous in s uniformly for t ∈ [0, T]; that is, for every ε > 0 there exists a δ > 0 such that |s_1 - s_2| < δ implies

(3.5)   |x(t, s_1) - x(t, s_2)| < ε   for all t ∈ [0, T].

A constant k is chosen to bound |A(t)| and the integrals of |B(t, u)| on [0, T] × [0, T]; this leads to a Gronwall-type estimate (3.8). With the aid of (3.8) we now prove (3.5). For definiteness, suppose s_1 < s_2; then (3.8) yields |x(t, s_1) - x(t, s_2)| < ε for all s_1, s_2 ∈ [0, T] with |s_1 - s_2| < δ and all t ∈ [0, T], which implies (3.5). Therefore, x(t, s) is continuous in s uniformly for t ∈ [0, T], and, being continuous in t for each fixed s, it is jointly continuous on Ω.

Principal Matrix Solution
For a fixed s ≥ 0, let S denote the set of all solutions of (3.1) on the interval [s, ∞) that correspond to initial vectors. Let x(t, s) and x̃(t, s) be two such solutions satisfying the initial conditions x(s, s) = x_0 and x̃(s, s) = x_1, respectively. Linearity of (3.1) implies the principle of superposition, namely, that the linear combination c_1 x(t, s) + c_2 x̃(t, s) is a solution of (3.1) on [s, ∞) for any c_1, c_2 ∈ R. Consequently, the set S is a vector space. Note that S comprises all solutions that have their initial values specified at t = s, but not those for which an initial function is specified on an initial interval [s, t_0] for some t_0 > s.
Theorem 4.1. For a fixed s ∈ [0, ∞), let S be the set of all solutions of (3.1) on the interval [s, ∞) corresponding to initial vectors. Then S is an n-dimensional vector space.
Proof. We have already established that S is a vector space. To complete the proof, we must find n linearly independent solutions spanning S. To this end, let e_1, . . ., e_n be the standard basis for R^n, where e_i is the vector whose ith component is 1 and whose other components are 0. By Theorem 2.2, there are n unique solutions x_i(t, s) of (3.1) on [s, ∞) with x_i(s, s) = e_i (i = 1, . . ., n). By the usual argument, these solutions are linearly independent.
To show they span S, choose any x(t, s) ∈ S. Suppose its value at t = s is the vector x_0. Let ξ_1, . . ., ξ_n be the unique scalars such that x_0 = ξ_1 e_1 + ⋯ + ξ_n e_n. By the principle of superposition, the linear combination

(4.1)   ξ_1 x_1(t, s) + ⋯ + ξ_n x_n(t, s)

is a solution of (3.1). Since its value at t = s is x_0, the uniqueness part of Theorem 2.2 implies

x(t, s) = ξ_1 x_1(t, s) + ⋯ + ξ_n x_n(t, s).

Hence, the n solutions x_1(t, s), . . ., x_n(t, s) span S. This and their linear independence make them a basis for S.
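The span argument can be observed numerically: because (3.1), and any linear discretization of it, is linear in the initial vector, a solution with initial value ξ_1 e_1 + ξ_2 e_2 coincides with the same combination of the basis solutions. The following sketch uses a hypothetical 2 × 2 example; the matrices A, B and the Euler/trapezoid scheme are illustrative choices, not from the paper.

```python
import numpy as np

# Superposition check for x' = A x + \int_0^t B x(u) du on [0, 1].
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.5, -0.5]])   # constant kernel B(t, u) = B
h, steps = 1e-2, 100

def solve(x0):
    """March the homogeneous equation (3.1) forward from x(0) = x0."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        vals = np.array(xs) @ B.T          # B x(u) at each grid point so far
        if len(vals) < 2:
            memory = np.zeros(2)           # trapezoid rule needs two points
        else:
            memory = h * (0.5 * vals[0] + vals[1:-1].sum(axis=0) + 0.5 * vals[-1])
        xs.append(xs[-1] + h * (A @ xs[-1] + memory))
    return xs[-1]

x1 = solve([1.0, 0.0])                     # basis solution with x(0) = e1
x2 = solve([0.0, 1.0])                     # basis solution with x(0) = e2
xi1, xi2 = 2.0, -3.0
x = solve([xi1, xi2])                      # solution with x(0) = xi1*e1 + xi2*e2

gap = np.max(np.abs(x - (xi1 * x1 + xi2 * x2)))
```

Up to floating-point rounding, `gap` is zero: the discrete scheme inherits the superposition principle exactly.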
Definition 4.2. The principal matrix solution of (3.1) is the n × n matrix function Z(t, s) defined by

Z(t, s) := [x_1(t, s) ⋯ x_n(t, s)],

whose ith column is the solution x_i(t, s) of (3.1) with x_i(s, s) = e_i. In other words, Z(t, s) is a matrix with n columns that are linearly independent solutions of (3.1) and whose value at t = s is the identity matrix I.
Theorem 3.2 implies that each of the columns x_i(t, s) of Z(t, s) is continuous for 0 ≤ s ≤ t < ∞. Consequently, we have the following.

Theorem 4.4. Z(t, s), the principal matrix solution of equation (3.1), is continuous for 0 ≤ s ≤ t < ∞. Moreover, since the ith column of Z(t, s) is the unique solution of (3.1) whose value at t = s is e_i, Z(t, s) is the unique matrix solution of the initial value problem

\partial Z(t, s)/\partial t = A(t)Z(t, s) + \int_s^t B(t, u)Z(u, s)\,du,   Z(s, s) = I.
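For intuition, the initial value problem characterizing Z(t, s) can be stepped forward numerically. In the scalar example A ≡ 0, B ≡ -1 (a hypothetical illustration, not from the paper), differentiating z'(t) = -\int_0^t z(u) du gives z'' = -z with z(0) = 1, z'(0) = 0, so Z(t, 0) = cos t; a forward Euler/trapezoid scheme reproduces this.

```python
import numpy as np

# Scalar (1 x 1) instance of the IVP for Z(t, s) with A = 0, B = -1, s = 0.
# Exact principal solution: Z(t, 0) = cos(t).
n = 1001
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
a, b = 0.0, -1.0

def trap(y, h):
    """Composite trapezoidal rule on a uniform grid."""
    return 0.0 if len(y) < 2 else h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

z = np.empty(n)
z[0] = 1.0                                  # Z(s, s) = I
for i in range(n - 1):
    memory = b * trap(z[:i + 1], h)         # \int_s^t B Z(u, s) du
    z[i + 1] = z[i] + h * (a * z[i] + memory)   # forward Euler step

err = abs(z[-1] - np.cos(1.0))
```

With step size h = 10^-3 the first-order scheme tracks cos t to roughly three decimal places on [0, 1].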

Variation of Parameters Formula
Let X(t) be any fundamental matrix solution of the homogeneous differential equation

(5.1)   x'(t) = A(t)x(t).

By definition, the columns of a fundamental matrix solution X(t) are linearly independent solutions of (5.1). So for c ∈ R^n, x(t) = X(t)c is a solution of (5.1) by the principle of superposition. If x(τ) = x_0, then X(τ)c = x_0. Since X(τ) is nonsingular (cf. [9, p. 62]), the unique solution x(t) of (5.1) satisfying x(τ) = x_0 is

(5.2)   x(t) = X(t)X^{-1}(τ)x_0.

Now compare (5.2) to the unique solution of the nonhomogeneous equation

(5.3)   x'(t) = A(t)x(t) + f(t)

satisfying x(τ) = x_0. The method of variation of parameters applied to (5.3) (cf. [9, p. 65]) yields the following well-known formula for the solution:

(5.4)   x(t) = X(t)X^{-1}(τ)x_0 + \int_τ^t X(t)X^{-1}(s)f(s)\,ds.

Of course, (5.4) reduces to (5.2) if f ≡ 0. As for the integro-differential equation (3.1), the counterpart of (5.2) is (4.4), which is stated next as a lemma.
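Formula (5.4) is easy to check numerically in the scalar case. The data below are hypothetical: a = -1, f ≡ 1, τ = 0, x_0 = 0, for which X(t) = e^{at} and the exact solution is x(t) = 1 - e^{-t}.

```python
import numpy as np

# Scalar instance of (5.4): x' = a x + f with a = -1, f = 1, tau = 0, x0 = 0.
a, x0, tau = -1.0, 0.0, 0.0
n = 10001
s = np.linspace(tau, 1.0, n)
h = s[1] - s[0]
t = 1.0

# x(t) = X(t)X^{-1}(tau) x0 + \int_tau^t X(t)X^{-1}(s) f(s) ds,  X(t) = e^{a t}
integrand = np.exp(a * (t - s)) * 1.0      # X(t)X^{-1}(s) f(s)
integral = h * (0.5 * integrand[0] + integrand[1:-1].sum() + 0.5 * integrand[-1])
x_vp = np.exp(a * (t - tau)) * x0 + integral

err = abs(x_vp - (1.0 - np.exp(-1.0)))     # compare with the exact solution
```

The trapezoidal quadrature reproduces 1 - e^{-1} to well within single-precision accuracy.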
Lemma 5.1 extends a classical result for the homogeneous differential equation (5.1) to the homogeneous integro-differential equation (3.1). This suggests that a variation of parameters formula similar to (5.7) may also hold for the nonhomogeneous integro-differential equation (2.1).
The essential element in the derivation of the variation of parameters formula (5.4) is the nonsingularity of X(t) for each t. If the same were true of the principal matrix solution Z(t, s) of (3.1), then a variation of parameters formula could be derived for (2.1) as well. In fact, as Theorem 5.2 shows, there are examples of (3.1) other than (5.1) for which det Z(t, s) is never zero. It follows that the principal solution x(t, s) of (5.8) (i.e., the solution whose value at t = s is 1) is always positive. In our notation, Z(t, s) is the 1 × 1 matrix [x(t, s)], and so det Z(t, s) = x(t, s) > 0 for all t ≥ s ≥ 0.
EJQTDE, 2006 No. 14, p. 13

However, unlike the differential equation (5.1), the principal matrix solution of the integro-differential equation (3.1) may be singular at points, as the next theorem of Burton shows. If there exists a t_1 > 0 such that (5.12) holds as t → ∞, then there exists a t_2 > 0 such that x(t_2) = 0.
Theorem 5.3 establishes that the determinant of the principal matrix solution Z(t, s) of (3.1) may vanish. Consequently, unlike the nonhomogeneous differential equation (5.3), we cannot derive a formula like (5.7) in general for the nonhomogeneous integro-differential equation (2.1) by directly applying the method of variation of parameters to it. Nevertheless, in the next proof we use uniqueness of solutions to verify that the function given by the variation of parameters formula

(5.14)   x(t) = Z(t, τ)x_0 + \int_τ^t Z(t, s)f(s)\,ds,

where Z(t, s) is the principal matrix solution of (3.1), always satisfies (5.13), regardless of the values of det Z(t, s).
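Although det Z(t, s) may vanish in general, formula (5.14) can still be tested numerically. In the hypothetical scalar example A ≡ 0, B ≡ -1, the principal solution is Z(t, s) = cos(t - s) (differentiating in t gives z'' = -z with z = 1 and z' = 0 at t = s); with f ≡ 1, τ = 0 and x_0 = 0, (5.14) predicts x(t) = \int_0^t cos(t - s) ds = sin t, and a direct numerical solution of the nonhomogeneous equation agrees.

```python
import numpy as np

# Check (5.14) for x'(t) = -\int_0^t x(u) du + 1,  x(0) = 0,  on [0, 1].
n = 1001
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]

def trap(y, h):
    """Composite trapezoidal rule on a uniform grid."""
    return 0.0 if len(y) < 2 else h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# direct Euler/trapezoid solution of the nonhomogeneous equation
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = x[i] + h * (-trap(x[:i + 1], h) + 1.0)

# variation of parameters:  x(1) = Z(1, 0)*x0 + \int_0^1 Z(1, s) f(s) ds
x_vp = trap(np.cos(1.0 - t), h)            # Z(1, s) = cos(1 - s), f = 1, x0 = 0

gap = abs(x[-1] - x_vp)                    # the two computations agree at t = 1
```

Both routes land on sin(1) up to the discretization error of the first-order direct solver.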

The Adjoint Equation
The differential equation

r'(s) = -r(s)A(s) - \int_s^t r(u)B(u, s)\,du,

where s ∈ [0, t], is the adjoint equation of (3.1).
Let C[a, b] be the vector space of continuous functions φ : [a, b] → R^n. For a fixed real number r, let |·|_r be the norm on C[a, b] that is defined as follows: for φ ∈ C[a, b],

|φ|_r := max_{a ≤ t ≤ b} |φ(t)| e^{-r(t - a)}.
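The paper's central identity R(t, s) = Z(t, s) can also be observed numerically: integrating the resolvent equation backward in s, from s = t with R(t, t) = I, should reproduce the principal matrix solution obtained by integrating forward in t. In the hypothetical scalar example A ≡ 0, B ≡ -1 used earlier, Z(t, s) = cos(t - s), and the backward march recovers it.

```python
import numpy as np

# Backward integration (in s) of the resolvent equation for A = 0, B = -1:
#   dR/ds (t, s) = -R(t, s) A(s) - \int_s^t R(t, u) B(u, s) du
#                = \int_s^t R(t, u) du,        R(t, t) = 1,   with t = 1.
# Expected: R(1, s) = Z(1, s) = cos(1 - s).
n = 1001
u = np.linspace(0.0, 1.0, n)
h = u[1] - u[0]

def trap(y, h):
    """Composite trapezoidal rule on a uniform grid."""
    return 0.0 if len(y) < 2 else h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

R = np.empty(n)                # R[j] approximates R(1, u_j)
R[-1] = 1.0                    # R(t, t) = I
for j in range(n - 1, 0, -1):
    dRds = trap(R[j:], h)      # dR/ds at s = u_j (values for u >= s are known)
    R[j - 1] = R[j] - h * dRds # explicit Euler step, s decreasing by h

err = abs(R[0] - np.cos(1.0))  # compare R(1, 0) with Z(1, 0) = cos(1)
```

The backward-in-s resolvent and the forward-in-t principal solution agree to first order in the step size, a numerical echo of the theorem that R and Z are identical.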