On a class of differential-algebraic equations with infinite delay

We study the set of $T$-periodic solutions of a class of $T$-periodically perturbed Differential-Algebraic Equations, allowing the perturbation to contain a distributed and possibly infinite delay. Under suitable assumptions, the perturbed equations are equivalent to Retarded Functional (Ordinary) Differential Equations on a manifold. Our study is based on known results about the latter class of equations.


Introduction
This paper is devoted to the study of some properties of the set of harmonic solutions of retarded functional periodic perturbations of differential-algebraic equations (DAEs) of a particular type. The results we obtain are mainly related, on the one hand, to those of [7] concerning the method used to deal with distributed and possibly infinite delay and, on the other hand, to those of [5,14] as regards the treatment of DAEs.
Let g : R k × R s → R s and f : R k × R s → R k be given. Assume that f is continuous and that g ∈ C ∞ (R k × R s , R s ) has the property that ∂ 2 g(p, q), the partial derivative of g with respect to the second variable, is invertible for any (p, q) ∈ R k × R s ∼ = R n . We consider the following DAE in semi-explicit form:

(1.1) ẋ = f (x, y), g(x, y) = 0,

and perturb it as follows:

(1.2) ẋ = f (x, y) + λh(t, x t , y t ), g(x, y) = 0,

where h : R × BU ((−∞, 0], R k × R s ) → R k is continuous and, for a given T > 0, T -periodic in the first variable. The resulting equation (1.2) is an example of Retarded Functional Differential-Algebraic Equation (RFDAE). For λ ≥ 0, we are interested in the T -periodic solutions of (1.2) where, given t ∈ R, we adopt the notation x t : (−∞, 0] → R k and y t : (−∞, 0] → R s for the maps x t : θ → x(t + θ) and y t : θ → y(t + θ). Since ∂ 2 g(p, q) is invertible for any (p, q) ∈ R k × R s , 0 ∈ R s is a regular value of g, so that M := g −1 (0) is a C ∞ manifold and a closed subset of R k × R s ∼ = R n . The latter fact is important, as we wish to use the results of [7], which depend in an essential manner on M being closed.
The Implicit Function Theorem implies that M can be locally represented as the graph of some map from an open subset of R k to R s . Thus, in principle, equation (1.2) can be locally decoupled. Globally, however, this might not be the case, or it might not be convenient to do so (see, e.g., [5,14]).
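To fix ideas, the following one-dimensional illustration (k = s = 1, a hypothetical example not taken from [5,14]) shows a constraint that is globally a graph and yet is inconvenient to decouple explicitly:

```latex
% Hypothetical illustration with k = s = 1:
g(x,y) = y^3 + y - x, \qquad \partial_2 g(x,y) = 3y^2 + 1 > 0 .
% Here 0 is a regular value of g, so M = g^{-1}(0) is globally the graph
% of a map x \mapsto y(x); however, y(x) is a real root of a cubic and has
% no convenient closed form, so the decoupled equation
% \dot x = f\bigl(x, y(x)\bigr) is already awkward to write explicitly.
```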
As we shall see, proceeding as in [11, §4.5] (compare also [14]), when ∂ 2 g(p, q) is invertible for all (p, q) ∈ R k × R s , equation (1.2) is equivalent to a retarded functional differential equation (RFDE) on M of the form considered in [7]. Some related ideas, in the context of constrained mechanical systems, can be found in [13]. In order to obtain information about the set of T -periodic solutions of (1.2), we use the techniques of [7] combined with a result of [14] about the degree of the tangent vector field on M induced by the unperturbed equation (1.1). Our aim is to show the existence of a noncompact 'branch' of T -periodic solutions of (1.2) emanating from the set of the constant solutions of (1.1). Namely, denoting by C T (R k × R s ) the Banach space of the continuous T -periodic, R k × R s -valued functions, we prove the existence of a noncompact connected set of pairs (λ, ξ) ∈ [0, ∞) × C T (R k × R s ), with ξ a T -periodic solution of (1.2), whose closure meets the set of the constant solutions of (1.1).
In the last section of this paper, in order to illustrate our results we provide some applications to a particular class of implicit retarded functional differential equations.
We first discuss the notion of solution of a retarded functional DAE of the form (1.2). Let f : R k × R s → R k and g : R k × R s → R s be given maps, with f continuous and g ∈ C ∞ (R k × R s , R s ) such that ∂ 2 g(p, q) is invertible for any (p, q) ∈ R k × R s ∼ = R n . Given T > 0, consider also a map h : R × BU ((−∞, 0], R k × R s ) → R k , continuous and T -periodic in the first variable. A solution of (1.2), for a given λ ≥ 0, consists of a pair of functions x ∈ AC(I, R k ) and y ∈ C(I, R s ), I ⊆ R an interval with inf I = −∞, such that

(2.1a) g(x(t), y(t)) = 0, for all t ∈ I,

and, eventually,

(2.1b) ẋ(t) = f (x(t), y(t)) + λh(t, x t , y t ),

in the sense that there exists a subinterval J ⊆ I with sup J = sup I on which (2.1b) holds. Observe that, by the Implicit Function Theorem, y has the same regularity as x. Therefore, a solution of (1.2) is an absolutely continuous function ζ := (x, y) which is eventually C 1 , i.e., ζ| J ∈ C 1 (J, R k × R s ).
Let us now associate tangent vector fields on M = g −1 (0) to f and h. Recall that a continuous map w : M → R n with the property that w(p) belongs to the tangent space T p M to M at p for any p ∈ M is called a tangent vector field on M . Similarly, a time-dependent functional (tangent vector) field on M is a map Φ : R × BU ((−∞, 0], M ) → R n such that Φ(t, ϕ) ∈ T ϕ(0) M for all (t, ϕ). Consider the maps Ψ : M → R k × R s and Υ : R × BU ((−∞, 0], M ) → R k × R s defined as follows:

(2.2a) Ψ(p, q) = ( f (p, q), −[∂ 2 g(p, q)] −1 ∂ 1 g(p, q) f (p, q) ),

(2.2b) Υ(t, ϕ, ψ) = ( h(t, ϕ, ψ), −[∂ 2 g(ϕ(0), ψ(0))] −1 ∂ 1 g(ϕ(0), ψ(0)) h(t, ϕ, ψ) ).

Using the fact that, given a point (p, q) ∈ M , T (p,q) M is the kernel Ker d (p,q) g of the differential d (p,q) g of g at (p, q), it can be easily proved that Ψ is tangent to M , in the sense that Ψ(p, q) belongs to T (p,q) M for all (p, q) ∈ M (compare, e.g., [14]). Similarly, Υ is tangent to M , in the sense that Υ(t, ϕ, ψ) ∈ T (ϕ(0),ψ(0)) M for all (t, ϕ, ψ) ∈ R × BU ((−∞, 0], M ). In other words, Ψ is a tangent vector field, whereas Υ is a time-dependent functional field on M . Since h is assumed T -periodic in the first variable, so is Υ. Notice that, for any λ ≥ 0, the map (t, ϕ, ψ) → Ψ(ϕ(0), ψ(0)) + λΥ(t, ϕ, ψ) is a functional tangent vector field as well. We claim that (1.2) is equivalent to the following RFDE on M , which implicitly takes into account the algebraic condition g(x, y) = 0:

(2.3) ζ̇(t) = Ψ(ζ(t)) + λΥ(t, ζ t ),

where we use the compact notation ζ t = (x t , y t ); equivalence is understood in the sense that ζ = (x, y) is a solution of (2.3) on an interval I ⊆ R if and only if (x, y) is a solution of (1.2) on I. To verify the claim, let ζ = (x, y) be a solution of (1.2), defined on I ⊆ R, and let J ⊆ I be a subinterval on which (2.1b) holds. By differentiation of the algebraic equation g(x(t), y(t)) = 0, one gets

ẏ(t) = −[∂ 2 g(x(t), y(t))] −1 ∂ 1 g(x(t), y(t)) ẋ(t), when t ∈ J.

Hence, the solutions of (1.2) correspond to those of (2.3). The converse correspondence is more straightforward: a solution ζ = (x, y) of (2.3) defined on an interval I with inf I = −∞ satisfies (x(t), y(t)) ∈ M identically, which implies (2.1a), and eventually fulfills (2.1b). We now introduce an important technical assumption, (K) below, on the function h.
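The differentiation of the constraint suggests a simple numerical sketch (illustrative only; the maps f and g below are hypothetical choices, not taken from the paper): integrating the extended field (f, −[∂2g]^{-1}∂1g·f) by Euler steps keeps the computed solution close to M = g^{-1}(0).

```python
# Numerical sketch (hypothetical f and g) of the reduction of the
# semi-explicit DAE  x' = f(x, y), g(x, y) = 0  to an ODE:
# differentiating the constraint gives
#   y' = -d2g(x, y)^{-1} d1g(x, y) f(x, y),
# the second component of the tangent field Psi on M = g^{-1}(0).

def f(x, y):
    return y                      # hypothetical f

def g(x, y):
    return y - x**2               # hypothetical constraint: M is a parabola

def d1g(x, y):
    return -2.0 * x               # partial derivative of g w.r.t. x

def d2g(x, y):
    return 1.0                    # partial derivative of g w.r.t. y (never 0)

def euler_step(x, y, dt):
    dx = f(x, y)
    dy = -(1.0 / d2g(x, y)) * d1g(x, y) * f(x, y)
    return x + dt * dx, y + dt * dy

x, y = 1.0, 1.0                   # initial point on M: g(1, 1) = 0
for _ in range(1000):
    x, y = euler_step(x, y, 1e-4)

print(abs(g(x, y)))               # drift off the manifold stays small
```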
This hypothesis implies a similar property, called condition (H) (discussed, e.g., in [2]), for the induced functional field Υ on M defined in (2.2b), which plays a central role in [7]. This fact allows us to apply the methods of [7] to our situation.
Throughout this paper, we shall suppose that f is locally Lipschitz and that h satisfies the following assumption (K).

Definition 2.1. We say that K : R × BU ((−∞, 0], R n ) → R p satisfies (K) if, given any compact subset C of R × BU ((−∞, 0], R n ), there exists ℓ ≥ 0 such that

|K(t, ϕ) − K(t, ψ)| p ≤ ℓ sup θ≤0 |ϕ(θ) − ψ(θ)| n , for all (t, ϕ), (t, ψ) ∈ C.

Here | · | n and | · | p denote the Euclidean norms in R n and R p , respectively. Furthermore, we say that condition (K) holds locally in R × BU ((−∞, 0], R n ) if for any (τ, η) ∈ R × BU ((−∞, 0], R n ) there exists a neighborhood of (τ, η) in which (K) holds.
One could show that if (K) is satisfied locally, then it is also satisfied globally. However, the local condition is easier to check. It holds, for instance, when K is C 1 or, more generally, locally Lipschitz in the second variable.
The assumption that h satisfies (K) means that for any compact subset C of R × BU ((−∞, 0], R k × R s ), there exists a constant ℓ ≥ 0 such that

|h(t, ϕ 1 , ψ 1 ) − h(t, ϕ 2 , ψ 2 )| k ≤ ℓ sup θ≤0 ( |ϕ 1 (θ) − ϕ 2 (θ)| k + |ψ 1 (θ) − ψ 2 (θ)| s ), for all (t, ϕ 1 , ψ 1 ), (t, ϕ 2 , ψ 2 ) ∈ C.

Here | · | k and | · | s denote the Euclidean norms in R k and R s , respectively. Observe that if f : R k × R s → R k is locally Lipschitz and h is a functional field satisfying (K), then, for any λ ∈ [0, +∞), the map from R × BU ((−∞, 0], R k × R s ) to R k given by (t, ϕ, ψ) → f (ϕ(0), ψ(0)) + λh(t, ϕ, ψ) satisfies (K) as well.
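As a concrete (hypothetical) example of a functional satisfying (K), consider a distributed infinite delay with an integrable kernel:

```latex
% Hypothetical example of condition (K): for \varphi \in BU((-\infty,0],\mathbb{R}), take
h(t,\varphi) = \sin(t) \int_{-\infty}^{0} e^{\theta}\,\varphi(\theta)\,d\theta .
% Then, for any \varphi, \psi and any t,
|h(t,\varphi) - h(t,\psi)|
  \le \int_{-\infty}^{0} e^{\theta}\,d\theta \,
      \sup_{\theta \le 0}|\varphi(\theta) - \psi(\theta)|
  =   \sup_{\theta \le 0}|\varphi(\theta) - \psi(\theta)| ,
% so (K) holds globally with \ell = 1.
```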
If Ψ and Υ are the fields on M defined in (2.2), it is easy to see that, for any λ ≥ 0, the map (t, ϕ, ψ) → Ψ(ϕ(0), ψ(0)) + λΥ(t, ϕ, ψ) verifies condition (H) discussed in [2,7].
It can be proved (see, e.g., [2]) that if a functional field on M satisfies (H) locally, then any associated initial value problem admits a unique solution. Given the equivalence of (1.2) and (2.3), this shows that if f and h satisfy (K), then any initial value problem associated with (1.2) has a unique solution.
3. The degree of the tangent vector field Ψ

In this section we introduce some basic notions about the degree of tangent vector fields on manifolds. Recall that if w : M → R n is a tangent vector field on the differentiable manifold M ⊆ R n which is (Fréchet) differentiable at p ∈ M and w(p) = 0, then the differential d p w : T p M → R n maps T p M into itself (see, e.g., [12]), so that the determinant det d p w of d p w is defined. When p is a nondegenerate zero (i.e., d p w : T p M → T p M is injective), p is an isolated zero and det d p w ≠ 0.
Let M ⊆ R n be a boundaryless differentiable manifold, and let w be a tangent vector field on M . Let W be an open subset of M in which we assume w admissible for the degree, that is, we suppose that the set w −1 (0) ∩ W is compact. Then it is possible to associate to the pair (w, W ) an integer, deg(w, W ), called the degree (or characteristic) of the vector field w in W (see, e.g., [6,10]), which, roughly speaking, counts (algebraically) the zeros of w in W : when the zeros of w are all nondegenerate, the set w −1 (0) ∩ W is finite and

deg(w, W ) = Σ p ∈ w −1 (0)∩W sign det d p w.

The concept of degree of a tangent vector field is related to the classical one of Brouwer degree (whence its name), but the former notion diverges from the latter when dealing with manifolds. In particular, the former does not need the orientation of the underlying manifold. However, when M = R n , the degree of a vector field deg(w, W ) is essentially the well-known Brouwer degree deg B (w, W, 0) of w on W with respect to 0 (recall that in Euclidean spaces vector fields can be regarded as maps). For the main properties of the degree we refer, e.g., to [6,10,12].
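When M = R 2 the degree reduces to the Brouwer degree, and the algebraic count above can be carried out directly. A minimal sketch (with a hypothetical planar field w, and Jacobians approximated by central finite differences):

```python
# Hypothetical planar vector field w with two nondegenerate zeros; on
# M = R^2 the degree is the sum of sign det(d_p w) over the zeros in W.

def w(x, y):
    return (x * x - 1.0, y)       # zeros at (1, 0) and (-1, 0)

def jac_det(x, y, h=1e-6):
    # central finite differences for the 2x2 Jacobian determinant of w
    a = (w(x + h, y)[0] - w(x - h, y)[0]) / (2 * h)
    b = (w(x, y + h)[0] - w(x, y - h)[0]) / (2 * h)
    c = (w(x + h, y)[1] - w(x - h, y)[1]) / (2 * h)
    d = (w(x, y + h)[1] - w(x, y - h)[1]) / (2 * h)
    return a * d - b * c

zeros = [(1.0, 0.0), (-1.0, 0.0)]
deg = sum(1 if jac_det(x, y) > 0 else -1 for (x, y) in zeros)
print(deg)                        # 1 + (-1) = 0
```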
Proof. This follows immediately from Theorem 4.1 in [14] and the excision property of the degree.

Connected sets of T -periodic solutions
This section is concerned with the set of T -periodic solutions of (1.2). As in Section 2, we are given maps f : R k × R s → R k , g : R k × R s → R s and h : R × BU ((−∞, 0], R k × R s ) → R k such that: (1) f is locally Lipschitz; (2) g is C ∞ and such that det ∂ 2 g(p, q) ≠ 0 for all (p, q) ∈ R k × R s ; (3) h satisfies (K) and, given T > 0, is T -periodic with respect to its first variable.
Denote by C T (R k × R s ) the Banach space of all the continuous T -periodic functions taking values in R k × R s , with the usual supremum norm. We say that (λ, ζ) ∈ [0, ∞) × C T (R k × R s ) is a T -periodic pair for (1.2) if ζ is a T -periodic solution of (1.2) corresponding to λ; a T -periodic pair of the form (0, ζ), with ζ constant, is called trivial. Here, as well as in what follows, the elements of C T (R k × R s ) will be written as pairs whenever convenient; in this way, T -periodic pairs will often be written as triples (λ; x, y). Moreover, given (p, q) ∈ R k × R s , put

F (p, q) := ( f (p, q), g(p, q) ).

It can be easily verified that (p, q) is a constant solution of (1.2) for λ = 0 if and only if F (p, q) = (0, 0). Thus, with the above notation, the set of trivial T -periodic pairs can be written as

{ (0; p, q) ∈ [0, ∞) × C T (R k × R s ) : F (p, q) = (0, 0) },

where (p, q) is regarded as a constant function. The following convention is very handy. Given subsets Ω of [0, ∞) × C T (R k × R s ) and X of R k × R s , by X ∩ Ω we denote the set of points of X that, regarded as constant functions, lie in Ω. Namely,

X ∩ Ω := { (p, q) ∈ X : (0; p, q) ∈ Ω }.

The next result provides an insight into the topological structure of the set of T -periodic solutions of (1.2).
Theorem 4.1. Let f , h and g be as above, and let F : R k × R s → R k × R s be given by F (p, q) = ( f (p, q), g(p, q) ). Let Ω be an open subset of [0, ∞) × C T (R k × R s ), and assume that deg( F, Ω ∩ (R k × R s ) ) is well-defined and nonzero. Then the set of nontrivial T -periodic pairs of (1.2) admits a connected subset whose closure in Ω is noncompact and meets the set of trivial T -periodic pairs in Ω, i.e., the set { (0; p, q) ∈ Ω : F (p, q) = (0, 0) }. In particular, the set of T -periodic pairs of (1.2) contains a connected component that meets { (0; p, q) ∈ Ω : F (p, q) = (0, 0) } and whose intersection with Ω is not compact.
Clearly, each (λ; x, y) ∈ Λ is a nontrivial T -periodic pair of (1.2). Since M = g −1 (0) is closed in R k × R s , it is not difficult to prove that any set that is closed in O is so in Ω and vice versa. Thus, Λ coincides with the closure of Λ in Ω. The first part of the assertion follows.
Let us prove the last part of the assertion. Consider the connected component Γ of the set of T -periodic pairs that contains the connected set Λ. We shall now show that Γ has the required properties. Clearly, Γ meets the set { (0; p, q) ∈ Ω : F (p, q) = (0, 0) }, because the closure of Λ in Ω does. Moreover, Γ ∩ Ω cannot be compact, since it contains the (noncompact) closure of Λ in Ω.

Remark 4.2.
Let Ω be as in Theorem 4.1, and assume that Γ is a connected component of the set of T -periodic pairs of (1.2) that meets { (0; p, q) ∈ Ω : F (p, q) = (0, 0) } and whose intersection with Ω is not compact. Ascoli's Theorem implies that any bounded set of T -periodic pairs is relatively compact. Then the closed set Γ cannot be both bounded and contained in Ω. In particular, if Ω is bounded, then Γ necessarily meets the boundary of Ω.
The following corollary ensures the existence of a Rabinowitz-type branch of T -periodic pairs.
Corollary 4.3. Let f , h and g be as above, and let F be as in Theorem 4.1. Let U be an open subset of R k × R s , and assume that deg(F, U ) is well-defined and nonzero. Then there exists a connected set of nontrivial T -periodic pairs of (1.2) whose closure meets the set { (0; p, q) : (p, q) ∈ U, F (p, q) = (0, 0) } and is either unbounded or meets the boundary of U .

Proof. Consider the open subset Ω of [0, +∞) × C T (R k × R s ) given by

Ω = ( [0, +∞) × C T (R k × R s ) ) \ { (0; p, q) : (p, q) ∉ U }.

Clearly, we have Ω ∩ (R k × R s ) = U . Hence deg(F, Ω ∩ (R k × R s )) ≠ 0. Theorem 4.1 implies the existence of a connected component Γ of T -periodic pairs of (1.2) that meets { (0; p, q) ∈ Ω : F (p, q) = (0, 0) } and whose intersection with Ω is not compact. Because of Remark 4.2, if Γ is bounded, then it meets the boundary of Ω, which is given by

∂Ω = { (0; p, q) : (p, q) ∉ U },

and the assertion is proved.

The equation

(4.2) ẋ = αx − βx 2 , α, β > 0,

is sometimes used as a model for a population x with birth and mortality rates αx and βx 2 , respectively. Consider a generalization of (4.2) in which the mortality rate y is related to the population by the implicit relation g(x, y) = 0. This generalized model is expressed by the following DAE:

ẋ = αx − y, g(x, y) = 0.

If we allow the population's fertility to undergo periodic oscillations, say λh(t, x t ) with λ ≥ 0, depending possibly on the history of the population, the above model can be modified into the following RFDAE:

ẋ = αx − y + λh(t, x t ), g(x, y) = 0.
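For illustration, here is a minimal simulation of the unperturbed (λ = 0) model, assuming for concreteness the hypothetical constitutive relation g(x, y) = y − βx², under which the DAE reduces to the logistic equation (4.2):

```python
# Hypothetical instance of the population DAE (illustration only):
# assume the differential part is x' = alpha*x - y and the algebraic
# constraint is g(x, y) = y - beta*x**2 = 0, so that on M the model
# reduces to x' = alpha*x - beta*x**2, with positive equilibrium
# x* = alpha/beta.

alpha, beta = 2.0, 1.0
x = 0.1                           # small initial population
dt = 1e-3
for _ in range(20000):            # integrate up to t = 20
    y = beta * x * x              # resolve the constraint g(x, y) = 0 for y
    x += dt * (alpha * x - y)     # explicit Euler step for the x-equation

print(x)                          # approaches alpha/beta
```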
As for the perturbation h(t, x t ), one could look into examples inspired by models describing the dynamics of animal populations (see, e.g., [3,4]) in which the delay is distributed in time. For instance, one could take h(t, x t ) = sin(t)

An application
This technical section is primarily intended as an illustration of our main result, Theorem 4.1; for this reason we will not pursue maximal generality but restrict ourselves to simple situations. Below, we consider retarded periodic perturbations of a particular class of implicit functional differential equations. Namely, we study equations of the following form:

(5.1) E ẋ = F(x) + λH(t, x t ),

where E : R n → R n is a linear endomorphism of R n , and F : R n → R n and H : R × BU ((−∞, 0], R n ) → R n are continuous maps with F locally Lipschitz and H verifying condition (K). We will show how, in some circumstances, (5.1) can be transformed into RFDAEs of type (1.2) by means of relatively simple linear transformations. We will then apply the results of the previous section to the resulting RFDAEs. A first example of the above-mentioned transformation is considered in the following remark.

Remark 5.1. Consider equation (5.1) and assume that there exists a decomposition of R n as R r × R n−r such that E and H can be represented as follows:

(5.2a) E = ( E 11 E 12 ; 0 0 ), with E 11 ∈ R r×r invertible and E 12 ∈ R r×(n−r) ,

(5.2b) H(t, ϕ) = ( H 1 (t, ϕ), 0 ), with H 1 : R × BU ((−∞, 0], R n ) → R r .

In R n ≃ R r × R n−r put x = (ξ, η), and let J E : R r × R n−r → R r × R n−r be the linear transformation represented by the following block matrix:

J E = ( E 11 −1 −E 11 −1 E 12 ; 0 I n−r ).

Let (x, y) = J E −1 (ξ, η), and let F 1 (ξ, η) and F 2 (ξ, η) denote the projections of F(ξ, η) onto the first and second factor, respectively, of R r × R n−r . Then, in the new variables x and y, equation (5.1) becomes, with a slight abuse of notation,

ẋ = F 1 (J E (x, y)) + λH 1 (t, J E ∘ (x t , y t )), 0 = F 2 (J E (x, y)),

or, equivalently,

(5.3) ẋ = f (x, y) + λh(t, x t , y t ), g(x, y) = 0,

where f (x, y) := F 1 (J E (x, y)), g(x, y) := F 2 (J E (x, y)) and h(t, ϕ, ψ) := H 1 (t, J E ∘ (ϕ, ψ)). Since J E is linear and invertible, it is not difficult to prove that h satisfies (K) as well.
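The role of J E can be checked numerically. In the sketch below (with a hypothetical E), J E is taken, as an assumption consistent with the remark above, in the block form [[E11^{-1}, −E11^{-1}E12], [0, I]], which makes E·J E = [[I, 0], [0, 0]], i.e., the transformed equation is semi-explicit:

```python
# Numerical check of the block transformation of Remark 5.1, assuming
# the block form  J_E = [[inv(E11), -inv(E11) @ E12], [0, I]],
# so that E @ J_E = [[I, 0], [0, 0]] (hypothetical E, with r = 2, n = 3).

import numpy as np

E11 = np.array([[2.0, 1.0],
                [0.0, 1.0]])                  # invertible r x r block
E12 = np.array([[3.0],
                [4.0]])                       # r x (n - r) block
E = np.block([[E11, E12],
              [np.zeros((1, 3))]])            # E = [[E11, E12], [0, 0]]

inv11 = np.linalg.inv(E11)
J = np.block([[inv11, -inv11 @ E12],
              [np.zeros((1, 2)), np.eye(1)]])

print(E @ J)                                  # [[1,0,0],[0,1,0],[0,0,0]]
```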
Example 5.2. Consider a DAE in R 3 which can be written as an implicit ODE of the form (5.1) satisfying the assumptions of Remark 5.1 with r = 2, and put (x 1 , x 2 , y) = J E −1 (ξ 1 , ξ 2 , ξ 3 ). One has that

F(J E (x 1 , x 2 , y)) = ( x 1 − x 2 + y, −x 1 + (x 1 − x 2 + y) 2 + y, y 3 + y + x 1 ).

As in Remark 5.1, for ξ = (ξ 1 , ξ 2 , ξ 3 ), let F 1 (ξ) and F 2 (ξ) denote the projections of F(ξ) onto the first and second factor, respectively, of R 2 × R. Put also F i (x, y) = F i (J E (x, y)), i = 1, 2, where x = (x 1 , x 2 ). Proceeding as in Remark 5.1, we transform Equation (5.4) into an equation of the form (5.3), which can be written more explicitly as follows:

ẋ 1 = x 1 − x 2 + y + λh 1 (t, x t , y t ), ẋ 2 = −x 1 + (x 1 − x 2 + y) 2 + y + λh 2 (t, x t , y t ), 0 = y 3 + y + x 1 ,

where h = (h 1 , h 2 ) is as in Remark 5.1. Theorem 4.1, combined with the above Remark 5.1, yields Proposition 5.3 below concerning the set of T -periodic solutions of (5.1). We use here the convention on the subsets of [0, ∞) × C T (R r × R n−r ) introduced in Section 4. We also need to introduce some further notation.
A pair (λ, x) ∈ [0, ∞) × C T (R n ) is a T -periodic pair for (5.1) if x is a solution of (5.1) corresponding to λ. A T -periodic pair (0, x) for (5.1) is trivial if x is constant.

Proposition 5.3. Consider Equation (5.1), where E : R n → R n is linear, and F : R n → R n and H : R × BU ((−∞, 0], R n ) → R n are continuous maps such that F is locally Lipschitz and H verifies condition (K) and is T -periodic in the first variable. Assume, as in Remark 5.1, that there exists a decomposition R n ≃ R r × R n−r such that E and H can be represented as in (5.2). Relative to this decomposition, suppose that ∂ 2 F 2 (ξ, η) is invertible for all (ξ, η) ∈ R r × R n−r .
Let Ω be an open subset of [0, ∞) × C T (R n ) and suppose that deg(F, Ω ∩ R n ) is well-defined and nonzero. Then there exists a connected subset Γ of nontrivial T -periodic pairs for (5.1) whose closure in Ω is noncompact and meets the set { (0, p) ∈ Ω : F(p) = 0 }.
Proof. Let J E be the linear transformation introduced in Remark 5.1, and consider the map 𝒥 E : [0, ∞) × C T (R n ) → [0, ∞) × C T (R n ) given by 𝒥 E (λ, ζ) = (λ, J E ∘ ζ). Observe that, since J E is invertible, 𝒥 E is continuous and invertible, with 𝒥 E −1 (λ, ζ) = (λ, J E −1 ∘ ζ). With the convention on the subsets of [0, ∞) × C T (R n ) introduced in Section 4, we have

𝒥 E −1 (Ω) ∩ R n = J E −1 (Ω ∩ R n ).

According to Remark 5.1, under our assumptions Equation (5.1) is equivalent to the RFDAE (5.3). We now show that Theorem 4.1 can be applied to Equation (5.3) in the open set 𝒥 E −1 (Ω). The property of invariance under diffeomorphism of the degree (also called topological invariance, see, e.g., [6]) yields that deg(F ∘ J E , J E −1 (Ω) ∩ R n ) equals deg(F, Ω ∩ R n ) up to sign, and is therefore nonzero. Theorem 4.1 then provides a connected set of nontrivial T -periodic pairs of (5.3) with the required properties; its image under 𝒥 E is the desired connected subset Γ of nontrivial T -periodic pairs for (5.1).
Observe that Proposition 5.3 seems to impose rather severe constraints on the form of E and H in Equation (5.1). In fact, with the help of some linear transformations, one can sometimes lift these restrictions. This is the case when the perturbing term H has a particular 'separated variables' form that agrees with E in the sense of Equation (5.9) below. Namely, we consider the following equation:

(5.8) E ẋ = F(x) + λC(t)S(x t ),

where C : R → R n×n and S : BU ((−∞, 0], R n ) → R n are continuous maps, E is a (constant) nonzero n × n matrix, and F is as in Equation (5.1). We also assume that C and E agree in the following sense:

(5.9) ker C(t) = ker E and ker C(t) T = ker E T , for all t ∈ R.

As a consequence of the well-known Rouché–Capelli Theorem, we get that

(5.10) rank E = rank C(t) is constant and greater than 0, for all t ∈ R.

Our argument is based on a singular value decomposition (see, e.g., [9]) and on the following technical result from linear algebra:

Lemma 5.5. Let E ∈ R n×n and C ∈ C(R, R n×n ) be as in (5.9). Put r = rank E, and let P, Q ∈ R n×n be orthogonal matrices that realize a singular value decomposition for E. Then

(5.11) P T C(t)Q = ( C 11 (t) 0 ; 0 0 ),

with C 11 ∈ C(R, R r×r ) invertible for any t ∈ R.
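The conclusion of the lemma can be visualized numerically in the (trivial) constant-coefficient case C(t) ≡ E, using a hypothetical rank-one matrix. Note the numpy convention: `numpy.linalg.svd` returns E = U·diag(s)·Vt, so one may take P = U and Q = Vt.T:

```python
# Illustration of the singular value decomposition behind Lemma 5.5,
# for a hypothetical rank-one matrix E in R^{2x2} (taking C(t) = E,
# which trivially satisfies (5.9)).

import numpy as np

E = np.array([[1.0, 2.0],
              [2.0, 4.0]])                    # rank E = 1
U, s, Vt = np.linalg.svd(E)                   # E = U @ diag(s) @ Vt
P, Q = U, Vt.T                                # orthogonal factors

D = P.T @ E @ Q                               # block-diagonal: diag(s)
print(np.round(D, 8))
```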
We will provide a proof of Lemma 5.5 for the sake of completeness but, before doing that, we show how it can be used to convert equation (5.8) into an equation of the form (5.1). We begin with an example, in which, for any t ∈ R, c(t) = sin(t) + 2 and d(t) = cos(t) + 3 appear as entries of C(t). It can be easily verified that, with this choice of E and C, (5.9) is satisfied. Here, clearly, r = 3 and n = 4. Consider the following orthogonal matrices P and Q.

Let us consider the orthogonal change of coordinates x̄ = Q T x. Multiplying (5.8) by P T on the left, we get the following equivalent equation:

(5.12) P T EQ ẋ̄ = P T F(Q x̄) + λP T C(t)S(Q x̄ t ).

Set Ē = P T EQ, F̄(x̄) = P T F(Q x̄) for all x̄ ∈ R 4 and, finally, put H̄(t, ϕ) = P T C(t)Q Q T S(Qϕ) for all (t, ϕ) ∈ R × BU ((−∞, 0], R 4 ). Thus (5.12) can be rewritten as

Ē ẋ̄ = F̄(x̄) + λH̄(t, x̄ t ).

It is easily verified that the so-defined Ē and H̄ satisfy (5.2), so that (5.12) is precisely of the form (5.1). In other words, we have transformed (5.8), for E and C as above, into an equation of the form considered in Proposition 5.3.
Let us now consider Equation (5.8) in general. Let r > 0 be the rank of E, and assume that (5.9) is satisfied. Then Lemma 5.5 yields orthogonal matrices P and Q in R n×n such that, for every t ∈ R, P T C(t)Q is as in (5.11), and P and Q realize a singular value decomposition of E. That is,

(5.13) P T EQ = ( E 11 0 ; 0 0 ),

where E 11 ∈ R r×r is a diagonal matrix with positive diagonal elements. As in the above example, consider the orthogonal change of coordinates x̄ = Q T x in Equation (5.8) and multiply by P T on the left. We get the equivalent equation

(5.14) Ē ẋ̄ = F̄(x̄) + λH̄(t, x̄ t ),

where Ē, F̄ and H̄ are given by Ē = P T EQ, F̄(x̄) = P T F(Q x̄) for all x̄ ∈ R n , and H̄(t, ϕ) = P T C(t)Q Q T S(Qϕ) for all (t, ϕ) ∈ R × BU ((−∞, 0], R n ). A straightforward computation shows that Ē and H̄ satisfy conditions (5.2). Therefore, (5.14) is of the form considered in Proposition 5.3, from which we deduce the following consequence:

Corollary 5.7. Consider Equation (5.8), where the maps C : R → R n×n and S : BU ((−∞, 0], R n ) → R n are continuous, E is a (constant) n × n matrix, and F is as in Equation (5.1); assume that F is locally Lipschitz and that S verifies condition (K).
Suppose also that C and E satisfy (5.9) and that C is T -periodic. Letting r > 0 be the rank of E, consider the decomposition R n ≃ R r × R n−r and assume that, relative to this decomposition, ∂ 2 F 2 (ξ, η) is invertible for all (ξ, η) ∈ R r × R n−r . Let Ω be an open subset of [0, ∞) × C T (R n ) and suppose that deg(F, Ω ∩ R n ) is well-defined and nonzero. Then there exists a connected subset Γ of nontrivial T -periodic pairs for (5.8) whose closure in Ω is noncompact and meets the set { (0, p) ∈ Ω : F(p) = 0 }.
Proof. The assertion follows immediately by applying Proposition 5.3 to Equation (5.14).
We conclude this section with a proof of our technical Lemma.
Proof of Lemma 5.5. Since rank C(t) is constantly equal to r > 0, by inspection of the proof of Theorem 3.9 in [11, Chapter 3, §1] we get the existence of orthogonal matrix-valued functions U, V ∈ C(R, R n×n ) and of C r ∈ C(R, R r×r ) such that, for all t ∈ R, det C r (t) ≠ 0 and

(5.15) U T (t)C(t)V (t) = ( C r (t) 0 ; 0 0 ).
Let U r , V r ∈ C(R, R n×r ) and U 0 , V 0 ∈ C(R, R n×(n−r) ) be the matrix-valued functions formed, respectively, by the first r and by the last n − r columns of U and V . A simple argument involving Equation (5.15) shows that the columns of V 0 (t), t ∈ R, are in ker C(t) and, since there are n − r = dim ker C(t) of them, the columns of V 0 (t) actually span ker C(t). In fact, the orthogonality of the matrix V (t), t ∈ R, implies that the columns of V 0 (t) form an orthonormal basis of ker C(t). A similar argument proves that, for all t ∈ R, the columns of U 0 (t) constitute an orthonormal basis of ker C(t) T . Observe also that, since im C(t) is orthogonal to ker C(t) T for all t ∈ R, the columns of U r (t) form an orthonormal basis of im C(t), and those of V r (t) one of im C(t) T .
Similarly, let P r , Q r and P 0 , Q 0 be the matrices formed by taking, respectively, the first r and the last n − r columns of P and Q. Since P and Q realize a singular value decomposition of E, one can check that the columns of P r , Q r , P 0 and Q 0 span im E, im E T , ker E T and ker E, respectively.
We claim that P T 0 U r (t) is constantly the null matrix. To prove this, it is enough to show that, for all t ∈ R, the columns of P 0 are orthogonal to those of U r (t). Let v and u(t), t ∈ R, be any columns of P 0 and of U r (t), respectively. Since for all t ∈ R the columns of U r (t) are in im C(t), there is a vector w(t) ∈ R n with the property that u(t) = C(t)w(t), and

⟨v, u(t)⟩ = ⟨v, C(t)w(t)⟩ = ⟨C(t) T v, w(t)⟩ = 0, t ∈ R,

because v ∈ ker E T = ker C(t) T for all t ∈ R. This proves the claim. A similar argument shows that P T r U 0 (t), V r (t) T Q 0 and V 0 (t) T Q r are identically zero as well. Since, for all t ∈ R,

P T U (t) = ( P T r U r (t) 0 ; 0 P T 0 U 0 (t) ) and V (t) T Q = ( V r (t) T Q r 0 ; 0 V 0 (t) T Q 0 )

are nonsingular, we deduce in particular that so are P T r U r (t) and V r (t) T Q r . Let us now compute the matrix product P T C(t)Q for all t ∈ R; we omit, for the sake of simplicity, the explicit dependence on t:

P T CQ = P T U ( C r 0 ; 0 0 ) V T Q = ( P T r U r C r V T r Q r 0 ; 0 0 ).
This proves the assertion, because P T r U r , C r and V T r Q r are nonsingular.