Qualitative aspects of a Volterra integro-dynamic system on time scales

This paper deals with the resolvent, asymptotic stability and boundedness of solutions of a time-varying Volterra integro-dynamic system on time scales in which the coefficient matrix is not necessarily stable. We generalize to a time scale some known results about asymptotic behavior and boundedness from the continuous case. Some new results for the discrete case are obtained.


Introduction and preliminaries
Basic qualitative results about Volterra integro-differential equations have been studied by many authors. Notable exceptions that have dispensed with the stability condition on the coefficient matrix are the works of Burton [4,5], Corduneanu [7], Choi and Koo [8], Mahfoud [21], Medina [22], and Rao and Srinivas [24], among others. In [4], the author investigates the stability and boundedness of solutions using anti-derivatives of the kernel. Sufficient conditions for uniformly bounded solutions are developed in [21]. In [24], the asymptotic behavior of the solution of a Volterra integro-differential equation is discussed in the case where the coefficient matrix is not necessarily stable. The resolvent of a Volterra integro-differential equation was first investigated by Grossman and Miller in [14]. In the discrete case, the resolvent equation was obtained by Elaydi in [10].
The area of dynamic equations on time scales is a modern and rapidly developing branch of applied analysis that provides a framework for effectively describing processes featuring both continuous and discrete elements (see, e.g., [2,18,19,20,25]). The theory was created by Hilger in 1988 [15] and developed by Bohner and Peterson [6]. Volterra-type equations (both integral and integro-dynamic) on time scales have become a new field of interest. In [16], Kulik and Tisdell obtained basic qualitative and quantitative results for Volterra integral equations. Furthermore, in [17] Karpuz studied the existence and uniqueness of solutions to generalized Volterra integral equations.
In a very recent paper [1], Adivar introduced the principal matrix solution and variation of parameters for Volterra integro-dynamic equations. Motivated by the interesting nature of this problem, an attempt has been made to study some stability and boundedness properties of the following system

x^∆(t) = A(t)x(t) + ∫_{t_0}^{t} K(t, s)x(s)∆s + F(t),  t ∈ T_0 = [t_0, ∞),
x(t_0) = x_0,   (1.1)

where 0 ≤ t_0 ∈ T^k is fixed, A (not necessarily stable) is an n × n matrix function and F is an n-vector function, both continuous on T_0, and K is an n × n matrix function, continuous on Ω := {(t, s) ∈ T_0 × T_0 : t_0 ≤ s ≤ t < ∞}.
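On the time scale T = Z the ∆-integral in (1.1) becomes a sum and the system reduces to a Volterra difference system, x(t+1) = x(t) + A(t)x(t) + Σ_{s=t_0}^{t−1} K(t, s)x(s) + F(t). The following minimal sketch (scalar case; the coefficient, kernel and forcing below are our own illustrative choices, not taken from the paper) shows how such a system is advanced by forward recursion:

```python
def solve_volterra_Z(a, k, f, x0, t0, T):
    """Forward recursion on the time scale T = Z for the scalar system
    x^Delta(t) = a(t)x(t) + sum_{s=t0}^{t-1} k(t, s) x(s) + f(t),
    where x^Delta(t) = x(t+1) - x(t)."""
    x = {t0: x0}
    for t in range(t0, T):
        memory = sum(k(t, s) * x[s] for s in range(t0, t))  # Delta-integral on Z
        x[t + 1] = x[t] + a(t) * x[t] + memory + f(t)
    return x

# Illustrative data (our own choice): decaying kernel, constant forcing.
x = solve_volterra_Z(a=lambda t: -0.5,
                     k=lambda t, s: 0.1 * 0.5 ** (t - s),
                     f=lambda t: 1.0,
                     x0=1.0, t0=0, T=50)
```

The recursion makes the "memory" character of the system explicit: each step uses the whole history x(t_0), …, x(t−1) through the kernel sum.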
For the reader's convenience, we briefly recall some basic definitions and facts from the time scales calculus that we will use in the sequel.
EJQTDE, 2013 No. 5, p. 2

A time scale T is a closed subset of R. It follows that the jump operators σ, ρ : T → T defined by σ(t) = inf{s ∈ T : s > t} and ρ(t) = sup{s ∈ T : s < t} (supplemented by inf ∅ := sup T and sup ∅ := inf T) are well defined. The point t ∈ T is left-dense, left-scattered, right-dense, right-scattered if ρ(t) = t, ρ(t) < t, σ(t) = t, σ(t) > t, respectively. If T has a right-scattered minimum m, define T_k := T − {m}; otherwise, set T_k := T. If T has a left-scattered maximum M, define T^k := T − {M}; otherwise, set T^k := T. The function µ(t) = σ(t) − t is called the graininess function. The notations [a, b], [a, b), and so on, will denote time scale intervals such as [a, b] := {t ∈ T : a ≤ t ≤ b}, where a, b ∈ T. Throughout this article we assume that sup T = ∞ and the graininess function µ(t) is bounded.

Definition 1.1 Let X be a Banach space. The function f : T → X is called rd-continuous provided it is continuous at each right-dense point and has a left-sided limit at each left-dense point.
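On a concrete time scale the jump operators and the graininess can be computed directly from their definitions; a small sketch (the finite sample set below is our own illustration — every finite subset of R is closed, hence a time scale):

```python
def sigma(T, t):
    """Forward jump operator: inf{s in T : s > t}, with inf(empty set) := sup T."""
    later = [s for s in T if s > t]
    return min(later) if later else max(T)

def rho(T, t):
    """Backward jump operator: sup{s in T : s < t}, with sup(empty set) := inf T."""
    earlier = [s for s in T if s < t]
    return max(earlier) if earlier else min(T)

def mu(T, t):
    """Graininess function mu(t) = sigma(t) - t."""
    return sigma(T, t) - t

T = [0, 1, 2, 5, 8]      # a finite time scale
print(sigma(T, 2))       # 2 is right-scattered: sigma(2) = 5 > 2
print(rho(T, 5))         # 5 is left-scattered:  rho(5) = 2 < 5
print(mu(T, 1))          # mu(1) = 1, so 1 behaves like a point of Z
```

The point 1 has σ(1) = 2 and ρ(1) = 0, so it is both left- and right-scattered, while the maximum 8 satisfies σ(8) = 8 by the convention inf ∅ := sup T.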
The set of all rd-continuous functions f : T → X is denoted by C_rd(T, X).

Definition 1.2 For t ∈ T^k, the ∆-derivative of f at t, denoted f^∆(t), is the number (provided it exists) with the property that for every ε > 0 there exists a neighborhood U of t such that

|f(σ(t)) − f(s) − f^∆(t)(σ(t) − s)| ≤ ε|σ(t) − s|  for all s ∈ U.

The set of all functions f : T → X that are differentiable on T and whose ∆-derivative satisfies f^∆ ∈ C_rd(T, X) is denoted by C¹_rd(T, X).
From the definition of the operator σ it follows that σ(t) = t if T = R, σ(t) = t + 1 if T = Z, and σ(t) = qt if T = q^Z, where q^Z := {q^k : k ∈ Z} ∪ {0} and q > 1. Hence the ∆-derivative f^∆(t) turns into the ordinary derivative f′(t) if T = R, and it becomes the forward difference operator ∆f(t) = f(t + 1) − f(t) whenever T = Z. For the time scale T = q^Z we have f^∆(t) = D_q f(t), where

D_q f(t) = (f(qt) − f(t)) / ((q − 1)t).

Thus, one can consider differential, difference and q-difference equations as special cases of dynamic equations on time scales.
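These specializations are easy to check numerically for, say, f(t) = t²: on Z the forward difference gives ∆f(t) = 2t + 1, and on q^Z the q-derivative gives D_q f(t) = (q + 1)t. A short sketch (our own illustration, not from the paper):

```python
def forward_diff(f, t):
    """Delta-derivative on T = Z: the forward difference f(t+1) - f(t)."""
    return f(t + 1) - f(t)

def q_derivative(f, t, q):
    """Delta-derivative on T = q^Z (t != 0): D_q f(t) = (f(qt) - f(t)) / ((q-1) t)."""
    return (f(q * t) - f(t)) / ((q - 1) * t)

f = lambda t: t ** 2
print(forward_diff(f, 3))           # 2*3 + 1 = 7
print(q_derivative(f, 4.0, 2.0))    # (q+1)*t = 3*4 = 12
```

Both values agree with the closed forms, and as q → 1 the q-derivative of a smooth f approaches the ordinary derivative, matching the T = R case.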
A function p : T → R is said to be regressive (respectively positively regressive) if 1 + µ(t)p(t) ≠ 0 (respectively 1 + µ(t)p(t) > 0) for all t ∈ T^k. The space of all regressive (respectively positively regressive) functions from T to R is denoted by R(T, R) (respectively R^+(T, R)). The space of all rd-continuous and regressive functions from T to R is denoted by C_rd R(T, R). The generalized exponential function e_p is defined as the unique solution y(t) = e_p(t, a) of the initial value problem y^∆ = p(t)y, y(a) = 1, where p is a regressive function. An explicit formula for e_p(t, a) is given by

e_p(t, a) = exp( ∫_a^t ξ_{µ(τ)}(p(τ)) ∆τ ),  where ξ_h(z) = Log(1 + hz)/h if h ≠ 0 and ξ_0(z) = z.

For more details, see [6]. Clearly, e_p(t, s) never vanishes. The following results will be used throughout this work.
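On T = Z the generalized exponential reduces to a product, e_p(t, a) = Π_{τ=a}^{t−1} (1 + p(τ)), which indeed solves the initial value problem y^∆ = p(t)y, y(a) = 1. A quick sketch verifying this (our own illustration, with a constant regressive p):

```python
def e_p_Z(p, t, a):
    """Generalized exponential on T = Z: product of (1 + p(tau)) for tau = a..t-1."""
    val = 1.0
    for tau in range(a, t):
        val *= 1.0 + p(tau)
    return val

p = lambda t: 0.5   # regressive on Z, since 1 + mu(t) p(t) = 1.5 != 0
y = {t: e_p_Z(p, t, 0) for t in range(6)}

# Check the dynamic IVP on Z: y(t+1) - y(t) = p(t) y(t), with y(0) = 1.
print(y[0])                                        # 1.0
print(all(abs((y[t + 1] - y[t]) - p(t) * y[t]) < 1e-9 for t in range(5)))
```

For constant p this is just (1 + p)^{t−a}, and since every factor 1 + µp is nonzero by regressivity, e_p(t, a) never vanishes, as stated above.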
It is easy to verify that the above result holds for f ∈ C_rd(T × T, R^n). We also consider the discrete time scale T^r_{(q,h)} (see [9,23]). Let t ∈ T^r_{(q,h)} and f : T^r_{(q,h)} → R. Then the delta (q, h)-derivative of f at t is

D_{(q,h)} f(t) = (f(qt + h) − f(t)) / ((q − 1)t + h).

For z = −1/(q′t + h), where q′ = q − 1, the exponential function has a closed form for all t, s ∈ T^r_{(q,h)}.
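As with D_q above, the delta (q, h)-derivative can be checked against f(t) = t², assuming the forward jump σ(t) = qt + h with graininess (q − 1)t + h: the difference f(qt + h) − f(t) factors as ((q − 1)t + h)((q + 1)t + h), so D_{(q,h)} f(t) = (q + 1)t + h. A sketch (our own illustration):

```python
def qh_derivative(f, t, q, h):
    """Delta (q,h)-derivative, assuming sigma(t) = q*t + h and
    graininess mu(t) = (q - 1)*t + h."""
    return (f(q * t + h) - f(t)) / ((q - 1) * t + h)

f = lambda t: t ** 2
q, h, t = 2.0, 3.0, 5.0
print(qh_derivative(f, t, q, h))   # (q + 1)*t + h = 3*5 + 3 = 18
```

Setting h = 0 recovers the q-derivative D_q, and q = 1 recovers the forward difference with step h, so both earlier special cases sit inside the (q, h)-calculus.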
Then, from (1.1), we obtain the following discrete variant

x^{∆_{(q,h)}}(t) = A(t)x(t) + ∫_{t_0}^{t} K(t, s)x(s)∆s + F(t),  t ∈ T^r_{(q,h)},
x(t_0) = x_0,   (1.2)

where A is an n × n matrix function, F is an n-vector function on T^r_{(q,h)}, and K is an n × n matrix function on Ω_{(q,h)} := {(t, s) ∈ T^r_{(q,h)} × T^r_{(q,h)} : t_0 ≤ s ≤ t < ∞}.

The rest of the paper is organized as follows. Section 2 is devoted to the study of the relation between the principal matrix and the resolvent of (1.1). In Section 3 we investigate the asymptotic behavior of the solutions of the system (1.1). The main aim of that section is to develop an equivalent system to (1.1) with the potential to give sufficient conditions for asymptotic stability. In Section 4 we first discuss the uniform boundedness of the solutions of (1.1) by constructing a Lyapunov functional. Further results on boundedness, uniform boundedness and stability of solutions are developed via an equivalent system to (1.1), constructed by using the anti-derivative of the kernel. For the discrete time scale T^r_{(q,h)} we give the related results as corollaries.

Resolvent
Lemma 2.1 If A(t) and K(t, s) are the continuous functions given in equation (1.1), then (2.1) holds, where the resolvent R(t, s) is defined in (2.2). Furthermore, using Theorem 1.8, we conclude that (2.1) and (2.2) are equivalent systems, and the proof is complete.
Theorem 2.2 Assume A and K are the continuous functions given in (1.1). Then the function R(t, s), as defined in (2.2), exists on t_0 ≤ s ≤ t and is continuous in (t, s); ∆_s R(t, s) exists, is continuous and satisfies equation (2.1) on t_0 ≤ s ≤ t, for each t > t_0. Moreover, given any vector x_0 and any continuous function F(t), equation (1.1) is equivalent to the system (2.4).

Proof. Since W(t, s) is continuous in s for each fixed t, the existence of R(t, s) on t_0 ≤ s ≤ t is trivial (see [17, Theorem 1]). From the above calculations, it follows that for each fixed t, ∆_s R(t, s) exists and satisfies (2.1) by Lemma 2.1. Since K is continuous on t_0 ≤ s ≤ t < ∞, w is continuous. An application of the Gronwall inequality (see [6, Theorem 6.4]) to (2.2) yields an estimate which implies that R(t, σ(s)) is continuous. Using this fact in (2.1), it is apparent that ∆_s R(t, s) is continuous, and by dominated convergence the continuity of R(t, s) in t for a fixed s follows. From (2.6), it is clear that R(t, s) is uniformly continuous for t_0 ≤ s ≤ t ≤ T. Hence, by [13, Theorem 5, p. 102], R(t, s) is continuous in the pair (t, s). Now let x(t) be a solution of (1.1) on an interval t_0 ≤ t ≤ T. Taking p(s) = R(t, s)x(s), using (1.1), integrating from t_0 to t and applying Theorem 1.8, we obtain, by (2.1), the representation (2.4). Moreover, if x(t) solves (2.4) on an interval t_0 ≤ t ≤ τ, then it is easy to see that x(t) solves (1.1), which completes the proof.
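The equivalence asserted in Theorem 2.2 can be illustrated numerically on T = Z, assuming the discrete variation-of-parameters form x(t) = Z(t, t_0)x_0 + Σ_{s=t_0}^{t−1} Z(t, s+1)F(s), where Z(·, s_0) denotes the scalar principal solution started at s_0 with zero pre-history (the data below are our own illustrative choices; the identification of the resolvent with a principal matrix solution is the theme of Theorem 2.6 below):

```python
def principal_Z(a, k, s0, T):
    """Principal solution of the homogeneous Volterra difference equation on Z:
    z(t+1) = z(t) + a(t) z(t) + sum_{u=s0}^{t-1} k(t, u) z(u),  z(s0) = 1,
    with zero history before s0 (scalar case)."""
    z = {s0: 1.0}
    for t in range(s0, T):
        z[t + 1] = z[t] + a(t) * z[t] + sum(k(t, u) * z[u] for u in range(s0, t))
    return z

def direct(a, k, f, x0, t0, T):
    """Direct forward recursion for the forced equation on T = Z."""
    x = {t0: x0}
    for t in range(t0, T):
        x[t + 1] = (x[t] + a(t) * x[t]
                    + sum(k(t, s) * x[s] for s in range(t0, t)) + f(t))
    return x

# Illustrative data (our own choice).
a = lambda t: -0.4
k = lambda t, s: 0.05 * 0.5 ** (t - s)
f = lambda t: (-1.0) ** t
t0, T, x0 = 0, 20, 2.0

x = direct(a, k, f, x0, t0, T)
Z = {s0: principal_Z(a, k, s0, T) for s0 in range(t0, T + 1)}
# Variation of parameters: x(t) = Z(t, t0) x0 + sum_{s=t0}^{t-1} Z(t, s+1) f(s).
ok = all(
    abs(Z[t0][t] * x0 + sum(Z[s + 1][t] * f(s) for s in range(t0, t)) - x[t]) < 1e-9
    for t in range(t0, T + 1)
)
print(ok)
```

By linearity, the contribution of x_0 and of each forcing value F(s) evolves like an impulse with zero pre-history, which is exactly what the shifted principal solutions Z(·, s + 1) encode; the check above confirms the two computations agree at every step.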
Consider the adjoint dynamic equation [6, Theorem 5.27], where A^T is the transpose of A. Let us extend this notion to the integro-dynamic equation (1.1).
Definition 2.3 For a fixed t, the adjoint to (1.1) is the equation (2.8), where s ∈ [t_0, t].
It is easy to see by Theorem 1.8 that (2.8) is equivalent to an integral equation (2.9). For the next result we define, for a fixed t, the space of continuous functions C_{y_0}[t_0, t] and the metric d¹_β. The metric space (C_{y_0}[t_0, t], d¹_β) is complete (with β replaced by ⊖β).

Theorem 2.4 For a fixed t ∈ T_0 such that t > t_0 and a given y_0 ∈ R^n, there is a unique solution y(s) of (2.9) on the interval [t_0, t] satisfying the condition y(t) = y_0.
Proof. We define the mapping P. For a given ϕ ∈ C_{y_0}[t_0, t], it follows that Pϕ is continuous on [t_0, t] and that (Pϕ)(t) = y_0. Thus, P maps C_{y_0}[t_0, t] into itself. For an arbitrary pair of functions ϕ, ψ ∈ C_{y_0}[t_0, t], since A(u) and K(u, v) are continuous for t_0 ≤ s ≤ u ≤ t, there is β > 1 such that the following estimate holds. Taking the supremum over s, we have the desired estimate.

Let Z_1(t, s) be the matrix solution of (2.11) such that Z_1(t, t) = I on the interval [t_0, t]. Reasoning as in the proof of [1, Theorem 12], we conclude that for a given y_0 ∈ R^n, the unique solution of (2.11) satisfying the condition y(t) = y_0 is (2.14). Taking the transpose of (2.11), we obtain (2.15). The solution satisfying the condition y^T(t) = y^T_0 is the transpose of (2.14), namely y^T(s) = y^T_0 R(t, s), where R(t, s) := Z^T_1(t, s). Consequently, R(t, s) is the principal matrix solution of the transposed equation. As a result, Lemma 18 from [1] has the following adjoint variant.
Theorem 2.6 The solution of (2.15) on [t_0, t] satisfying the condition y^T(t) = y^T_0 is y^T(s) = y^T_0 R(t, s), where R(t, s) is the principal matrix solution of (2.15).
Proof. Select any t > t_0. For a given row n-vector y^T_0, let y^T(s) be the unique solution of (2.15) on [t_0, t] such that y^T(t) = y^T_0. Integrating (2.18) from s to t, using (2.15), and interchanging the order of integration by Theorem 1.8, we obtain an expression whose integrand is zero by [1, Theorem 19]. Hence, y^T(s) = y^T_0 Z(t, s). On the other hand, y^T(s) = y^T_0 R(t, s). Therefore, by uniqueness of the solution y^T(s), (2.19) holds. Now let y^T_0 be the transpose of the i-th basis vector e_i. Then (2.19) implies that the i-th columns of R(t, s) and Z(t, s) are equal for t_0 ≤ s ≤ t. The conclusion follows since t is arbitrary.
The continuous version (T = R) of Theorem 2.6 can be found in [3, Theorem 2.7]. We now generalize the idea of the resolvent in order to discuss the asymptotic stability of (1.1) in the next section.

Asymptotic stability
Our first result in this section presents an equivalent system which involves an arbitrary function. A proper choice of this function has the potential to supply a stable matrix B corresponding to A.

Theorem 3.1 Let L(t, s) be an n × n continuously differentiable matrix function on Ω. Then (1.1) is equivalent to the following system:

L(t, σ(τ))K(τ, s)∆τ y(s)∆s.
(3.5) which implies Z(t) ≡ 0, by the uniqueness of the solution of Volterra integral equations [16]. Hence y(t) is a solution of (1.1). As a straightforward consequence of Theorem 3.1 we obtain Lemma 2.1 of [24]. It is also to be noted that, if L(t, s) is the differentiable resolvent corresponding to the kernel K(t, s), then the equations (3.1), (3.2) and (3.3) give the usual variation of constants formula (2.4).

Corollary 3.2 Let L(t, s) be an n × n matrix function on Ω_{(q,h)}. Then (1.2) is equivalent to the corresponding system.

Our next result concerns an estimate for the solution of (1.1). For this result we assume that the matrix B commutes with its integral, so that B commutes with its matrix exponential, that is, B(t)e_B(t, s) = e_B(t, s)B(t) [11,12].

e_α(σ(τ), t) G(τ, s) ∆τ x(s) ∆s.   (3.10)

Proof. Let x(t) be the solution of (3.1) and define q(t) = e_B(t_0, t)x(t). Substituting for x^∆(t) from (3.1) and integrating from t_0 to t, we obtain the expression for q(t) − q(t_0); then every solution x of (1.2) satisfies the analogous estimate.

In the next theorem we present sufficient conditions for asymptotic stability.

Theorem 3.5 Let L(t, s) be an n × n continuously differentiable matrix function on Ω, such that (a) the assumptions of Theorem 3.3 hold, where L_0, γ > α, α_0 are positive real constants. If α ⊖ Mα_0 > 0, then every solution x(t) of (1.1) tends to zero exponentially as t → +∞.
Proof. In view of Theorem 3.1 and the fact that L(t, s) satisfies (a), it is enough to show that every solution of (3.1) tends to zero as t → +∞. From (a) and using (3.10) we obtain the following inequality:

e_α(σ(τ), 0) G(τ, s) ∆τ x(s) ∆s.
Example 3.7 Let us consider the following Volterra integro-dynamic equation. In the following we check that the assumptions of Theorem 3.5 hold for T = R and T = N, respectively.
Let T = R. Here the constants are α = 2 and γ = 3. From (3.17) it follows that the required estimate holds. Then from (3.18) we obtain that G(t, s) is a positive function, from which it follows that α − Mα_0 > 0. Therefore, since all the assumptions of Theorem 3.5 hold for the system (3.16), the solution of (3.16) tends to zero exponentially as t → +∞.

Now we consider T = N. Here again the constants are α = 2 and γ = 3, and (3.17) follows in the same way.

Boundedness
In the first result of this section, we give sufficient conditions to ensure that (1.1) has bounded solutions. Our results are general and apply to (1.1) whether A(t) is stable, identically zero, or completely unstable, and require neither A(t) to be constant nor K(t, s) to be a convolution kernel. Let C(t) and D(t, s) be continuous n × n matrices, t_0 ≤ s ≤ t < ∞. Let s ∈ [t_0, ∞) and assume that C(t) is an n × n regressive matrix. Consider the unique matrix solution of the corresponding initial value problem. Then all solutions of (1.1) are uniformly bounded, and the zero solution of the corresponding homogeneous equation of (1.1) is uniformly stable with initial condition x(t_0) = 0.
Proof. Consider the following functional V. The derivative of V(t, x(·)) along a solution x(t) = x(t, t_0, x_0) of (1.1) satisfies the estimate below. From Theorem 1.117 of [6] we obtain the derivative formula, and by using (4.3) and Theorems 1.75 and 5.21 of [6] we have the corresponding expression.

Our next result concerns the boundedness of solutions of (1.1).

Theorem 4.5 Let x(t) be a solution of (1.1), where A(t) = ⊖a(1 + a²)/a², K(t, s) = e_{⊖a}(σ(t), s) with a > 2. Assume that F(t) is a bounded function.
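The flavor of Theorem 4.5 can be illustrated on T = Z, where µ ≡ 1 gives ⊖a = −a/(1 + a), so the coefficient ⊖a(1 + a²)/a² becomes −(1 + a²)/(a(1 + a)) and e_{⊖a}(σ(t), s) = (1 + a)^{−(t+1−s)}. With a = 3 and the bounded forcing F ≡ 1 (our own illustrative choices; this sketch is a numerical illustration, not a proof of the theorem), a direct recursion stays bounded:

```python
a = 3.0                                        # a > 2, as required in Theorem 4.5
A = -(1.0 + a * a) / (a * (1.0 + a))           # (circle-minus a)(1 + a^2)/a^2 on T = Z
K = lambda t, s: (1.0 + a) ** (-(t + 1 - s))   # e_{circle-minus a}(sigma(t), s) on Z
F = lambda t: 1.0                              # bounded forcing, |F| <= 1

x = {0: 1.0}
for t in range(0, 200):
    # x^Delta(t) = A x(t) + sum_{s=0}^{t-1} K(t, s) x(s) + F(t) on T = Z
    x[t + 1] = x[t] + A * x[t] + sum(K(t, s) * x[s] for s in range(0, t)) + F(t)

print(max(abs(v) for v in x.values()))   # stays below 2 for all 200 steps
```

Here 1 + A = 1/6 and the kernel tail sums to at most 1/12, so each step is a contraction plus the bounded forcing; the iterates settle near 4/3, consistent with uniform boundedness.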

Acknowledgement
The authors are grateful for the corrections and suggestions of the anonymous referee.

Theorem 4.8
If F(t) = 0 in (1.1), E(t) ≤ d on [t_0, ∞) for some d > 0, and the kernel condition (·, s) ∆s ≤ β holds for β sufficiently small, then the zero solution of (1.1) with initial condition x(t_0) = 0 is uniformly stable.

Now we show that P is a contraction on C_{y_0}[t_0, t]. Multiplying (2.10) by e_β(s, t_0), we obtain the contraction estimate. Therefore, by the Banach fixed point theorem, P has a unique fixed point in C_{y_0}[t_0, t]. It follows that (2.9) has a unique solution on the interval [t_0, t]. By virtue of this definition, Z_1(t, s) is the unique matrix solution whose i-th column satisfies Z_1^i(t, t) = e_i.