One-dimensional attractor for a dissipative system with a cylindrical phase space

Consider the attractor of a dissipative non-autonomous system with one angle coordinate. We give conditions for this attractor to be homeomorphic to the circle, and we establish connections with the work of R. A. Smith. Several applications are studied, such as the forced pendulum, discretizations of the sine-Gordon equation, and n-th order equations, among others.


Introduction
Let us consider a system of differential equations

x' = F(t, x), (1)

where F : R × R^N → R^N is a continuous function satisfying the periodicity conditions

F(t + T, x) = F(t, x) = F(t, x + R)

for some positive constant T ∈ R and a non-zero vector R ∈ R^N. Moreover, we will suppose that the solutions of (1) are uniquely determined by their initial conditions and consequently vary continuously with them. An example of the class of equations we have in mind is the periodically driven pendulum equation with friction,

x'' + cx' + sin(x) = p(t),

where c > 0 and p is a T-periodic continuous function. In this particular case, writing the equation as a first-order system in (x, x'), the periodicity vector is R = (2π, 0)*. As in the case of the pendulum, we can regard the phase space of equation (1) as a cylinder C, where each solution u is identified with the solutions u + kR, k ∈ Z. We assume that there exists a compact set B ⊂ C that intersects the forward orbit starting from any point and, moreover, that every compact set of C enters B in finite time; this characterizes the dissipative nature of equation (1). Consider the Poincaré map P on the cylinder. As in many other dissipative systems, we can define a maximal invariant set A for the Poincaré map (see [5], [7], [11] p. 21). This set is also an attractor for the orbits given by the iterates of the Poincaré map. In this paper we study conditions under which A is homeomorphic to T¹ = R/Z. There are already some results in this direction in the literature: in [6], [10] two-dimensional systems arising from a periodically driven pendulum with friction were analyzed, and in [9], [12], [13] autonomous coupled systems of differential equations were studied.
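As a concrete numerical illustration, the following sketch computes the Poincaré map of the damped forced pendulum x'' + cx' + sin(x) = p(t) with p(t) = A cos(2πt/T), read on the cylinder by reducing the angle mod 2π. The friction coefficient, forcing amplitude, and period below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Hedged sketch (not from the paper): Poincare map of the damped forced
# pendulum  x'' + c x' + sin(x) = p(t),  p(t) = A cos(2 pi t / T),
# written as the system  x' = v,  v' = -c v - sin(x) + p(t).
C_FRICTION, AMP, PERIOD = 0.5, 1.0, 2 * np.pi   # arbitrary illustrative values

def field(t, z):
    x, v = z
    p = AMP * np.cos(2 * np.pi * t / PERIOD)
    return np.array([v, -C_FRICTION * v - np.sin(x) + p])

def rk4_step(t, z, h):
    k1 = field(t, z)
    k2 = field(t + h / 2, z + h / 2 * k1)
    k3 = field(t + h / 2, z + h / 2 * k2)
    k4 = field(t + h, z + h * k3)
    return z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def poincare(z0, steps=2000):
    """Integrate over one full period; the angle is reduced mod 2*pi,
    i.e. the map is read on the cylinder."""
    t, z = 0.0, np.array(z0, dtype=float)
    h = PERIOD / steps
    for _ in range(steps):
        z = rk4_step(t, z, h)
        t += h
    return np.array([z[0] % (2 * np.pi), z[1]])

z1 = poincare([0.3, 0.0])
```

Iterating `poincare` then produces the discrete orbits whose maximal invariant set is the attractor A discussed below.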
The existence of a one-dimensional attractor allows a reduction of the dimension of the dynamics of the system. Moreover, in this case the Poincaré map becomes a homeomorphism from the circle to itself, which makes it possible to define a rotation number that gives considerable information about the dynamics of the system (see [8], [10], [12], [13]).
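To make the rotation number concrete, here is a minimal numerical sketch for a lift F : R → R of a circle homeomorphism, F(x + 1) = F(x) + 1; the rotation number is the limit of (F^n(x₀) − x₀)/n. The particular map below is an arbitrary illustrative choice, not the Poincaré map of (1).

```python
import numpy as np

# Minimal sketch of a rotation number computation for a lifted circle map.

def lift(x, omega=0.28, eps=0.15):
    # derivative 1 + eps*cos(2*pi*x) > 0, so this is an increasing lift
    return x + omega + eps * np.sin(2 * np.pi * x) / (2 * np.pi)

def rotation_number(F, x0=0.0, n=20000):
    # rho = lim (F^n(x0) - x0)/n; for a circle homeomorphism the limit
    # exists and is independent of x0
    x = x0
    for _ in range(n):
        x = F(x)
    return (x - x0) / n

rho = rotation_number(lift)
```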
We give a geometrical condition for A to be homeomorphic to the circle that unifies some of the results obtained before. This condition is related to the work of R. A. Smith [16].
In section 2 we formulate the main hypotheses. In section 3 we introduce a class of equations satisfying the dissipativity condition. An equation of pendulum type is studied in section 4. The connections with the work of R. A. Smith are analyzed in section 5. The remainder of the paper is devoted to the study of some particular cases of non-autonomous systems, some of which generalize results in the papers referred to above. This work resulted from the studies towards a Ph.D. degree of the author, supervised by Professor Rafael Ortega. The reader can find related results at http://ptmat.lmc.fc.ul.pt/~rmartins

General conditions
We denote by |x| the Euclidean norm of a column vector x ∈ R^N and by x* its transpose; the transpose of a real matrix A will be denoted by A*. We define an equivalence relation in R^N by

x ∼ y ⇔ x − y = kR for some k ∈ Z.

The set of equivalence classes will be denoted by C, and x̄ will denote the class of x ∈ R^N. The space C is a metric space with the distance

d(x̄, ȳ) = min_{k ∈ Z} |x − y − kR|.

Given the periodicity conditions on F, we observe that if x is a solution of (1) then x + R is also a solution; we conclude that the Poincaré map is well defined in C. More precisely, if x(t; t₀, x₀) is the solution of (1) satisfying the initial condition x(t₀) = x₀, then the Poincaré map is defined by

P(x̄₀) = x̄(T; 0, x₀).

The following hypothesis clarifies the notion of dissipation for system (1).
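The quotient distance d(x̄, ȳ) = min over k ∈ Z of |x − y − kR| can be evaluated by scanning finitely many integers k, since |x − y − kR| → ∞ as |k| → ∞ and the minimizing k lies near the projection of x − y onto R. A hedged sketch:

```python
import numpy as np

# Sketch of the distance on the cylinder C = R^N / (x ~ x + kR):
# d([x],[y]) = min over k in Z of |x - y - kR|.

def cylinder_dist(x, y, R):
    x, y, R = (np.asarray(v, dtype=float) for v in (x, y, R))
    d = x - y
    # the quadratic k -> |d - kR|^2 is minimized over the reals at
    # k* = <d, R>/|R|^2, so the integer minimizer is within rounding of k*
    k0 = int(round(np.dot(d, R) / np.dot(R, R)))
    return min(np.linalg.norm(d - k * R) for k in range(k0 - 2, k0 + 3))

R = np.array([1.0, 0.0])   # illustrative periodicity vector
```

For example, with R = (1, 0)* the points (2.1, 0) and (0.1, 0) lie in the same class, so their distance on C is zero.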
(H1) There exists a non-empty compact set B ⊂ C such that, for every compact set K ⊂ C, there is an integer m(K) ≥ 0 with P^k(K) ⊂ B for every k ≥ m(K).

Assuming (H1), the Poincaré map has domain D = C, and there exists an integer k₀ > 0 such that if k ∈ N and k ≥ k₀ then P^k(B) ⊂ B. Consider the set

B₁ = closure( ∪_{k ≥ k₀} P^k(B) ),

which is non-empty, compact, and satisfies P(B₁) ⊂ B₁. The set

A = ∩_{n ≥ 0} P^n(B₁)

is non-empty, compact, and P(A) = A. It is clear that if B' is another set in the conditions of (H1) and we use it in the construction above, we recover the same set A. It is also obvious that d(P^k(x̄), A) → 0 as k → +∞ for every x̄ ∈ C. We introduce the following set of solutions of (1):

B = { x : x is a solution of (1) defined on R whose orbit { x̄(t) : t ∈ R } is contained in a compact subset of C }.

The next lemma shows that A is the set of initial conditions of solutions in B.
Lemma 2.1 A = { x̄(0) : x ∈ B }.

Proof: For each x̄₀ ∈ A consider x(t) = x(t; 0, x₀). Since A is invariant for P we obtain, for each n ∈ Z,

x̄(nT) = P^n(x̄₀) ∈ A ⊂ B.

Thus { x̄(t) : t ∈ R } is contained in a compact subset of C, and so x ∈ B. On the other hand, if x ∈ B then there exists a compact set B₀ ⊂ C such that x̄(t) ∈ B₀ for all t ∈ R; by (H1) we necessarily have x̄(t) ∈ B for all t ∈ R. Finally, P^n(x̄(−nT)) = x̄(0) ∈ P^n(B₁) for all n ∈ N; we conclude that x̄(0) ∈ A.
We want to give conditions under which A is necessarily homeomorphic to the circle. To this end we introduce the following hypothesis:

(H2) There exists an (N − 1)-dimensional subspace F ⊂ R^N, with R^N = span{R} ⊕ F, such that for each pair of distinct solutions x₁ and x₂ of (1) bounded on the cylinder we have

x₁(0) − x₂(0) ∉ F.

Assuming (H2), consider the projection Q over F such that Ker Q = span{R}, and the function π : C → T¹ = R/Z given by

π(x̄) = t + Z, where (I − Q)x = tR.

Observe that hypothesis (H2) essentially translates the injectivity of π/A in terms of solutions of (1). Indeed we have the following Theorem.
Theorem 2.2 If (H1) and (H2) are satisfied then π/A : A → T¹ is a homeomorphism from A onto T¹.
Proof: Using (H2) and the last Lemma we conclude that π/A is one-to-one. It was proved in [14] that the inclusion i : A → C induces an isomorphism on the Čech cohomology, i* : Ȟ*(C) → Ȟ*(A). Since π/A is a homeomorphism from A onto its image, we conclude that π/A(A) has non-trivial Čech cohomology, so π/A(A) = T¹.

Verification of (H1)
We aim to apply the results of the last section to an equation of the form

x' = Cx + J(t, x), (2)

where x ∈ R^N, N ≥ 2, and C ∈ M_{N×N}(R). Suppose that C has an eigenvalue λ₁ = 0 with associated eigenspace spanned by a unit eigenvector R, and that all the other eigenvalues λ_i, i = 2, 3, . . . , k (for some k ∈ N), satisfy Re(λ_i) < 0. The function J : R × R^N → R^N is assumed to be continuous and to satisfy

J(t + T, x) = J(t, x) = J(t, x + R)

for some positive constant T. Finally, we will assume that the solutions exist and are uniquely determined by each set of initial conditions. We will denote by F an (N − 1)-dimensional subspace containing all the generalized eigenspaces associated to λ_i, i = 2, . . . , k, and such that R^N = span{R} ⊕ F. Consider the spectral projection Q over F such that Ker Q = span{R}.
Let us denote by Λ = −max{Re(λ_i) : i = 2, . . . , k} the absolute value of the largest real part among the non-null eigenvalues.
Lemma 3.2 Under the conditions above, system (2) satisfies (H1).

Proof: Notice that from (4) and the periodicity of J we conclude that there exists a constant c₃ bounding the relevant term. Applying the projection Q to both sides of (2), and observing that Q commutes with C, we obtain the projected equation. Since the conditions of the last Lemma are satisfied with β = −Λ/2, consider an inner product ⟨·, ·⟩_G whose associated norm |·|_G satisfies c₄|x| ≤ |x|_G ≤ c₅|x| for some constants c₄, c₅. For each ρ > 0 we will consider the set {x̄ ∈ C : |Qx|_G ≤ ρ}.

An equation of pendulum type
We consider the equation

x'' + h(x)x' + g(t, x) = 0, (5)

where the damping coefficient h : R → R is an element of C(R/Z) and is strictly positive, say 0 < min_R h = c. The nonlinearity g : R² → R is continuous and satisfies the periodicity conditions

g(t + T, x) = g(t, x) = g(t, x + 1)

for some positive constant T. Finally, we will suppose that the solutions exist and are uniquely determined by each set of initial conditions. If H(x) = ∫₀^x h(s) ds is a primitive of h, we can rewrite equation (5) as the system

x' = y − H(x),   y' = −g(t, x), (6)

and x is a solution of (5) if and only if (x, x' + H(x)) is a solution of (6). If we denote h̄ = ∫₀¹ h(s) ds, then H(x + 1) = H(x) + h̄, so (6) has the form of (1) with R = (1, h̄)*. Lemma 3.2 shows that equation (6) satisfies (H1).
Theorem 4.1 Suppose that (H1) is satisfied and that

(g(t, x) − g(t, y))/(x − y) < c²/4

for each (t, x, y) ∈ R³ with x ≠ y; then π/A is a homeomorphism from A onto T¹.
Proof: Although α and β are not necessarily continuous, they are measurable and bounded. Moreover, for each t ∈ R such that x₁(t) ≠ x₂(t) we have α(t) ≥ c and β(t) < c²/4. Observe that, for each t ∈ R such that ξ ≠ 0, the function γ = η/ξ satisfies the Riccati equation

γ' = −γ² + α(t)γ − β(t).

Since α and β are bounded, there is a constant M > 0 such that γ'(t) < 0 whenever γ(t) = M. On the other hand, if γ(t) = c/2 then γ'(t) > 0. This shows that the orbits do not leave the shaded cone in figure 1. Supposing in addition that the constant M is such that α(t) < M/2 for all t ∈ R, we observe that: • If η < cξ and ξ ≥ 0 then ξ' < 0.
The estimate obtained in the last Theorem is similar to the estimates obtained in [6], [10], but it refers to a more general type of equation. On the other hand, for an equation of this level of generality the estimate is optimal: according to [8], for each H > c²/4 there exist a nonlinearity g ∈ C∞(R/Z), T₁ > 0, and a forcing p ∈ C(R/T₁Z) such that g' < H and the attractor associated to the corresponding equation is not homeomorphic to T¹. An open and interesting problem is to find a similar optimal estimate for some classes of nonlinearities g, in particular for g(x) = sin(x).

Connections with the work of R. A. Smith
In [16], Russell A. Smith, looking for sufficient conditions for the existence of periodic solutions of (1) in R^N (without the periodicity of F in the second variable), and in particular of stable periodic solutions, introduced the following condition:

(H3) There exist λ > 0, ε > 0, and a symmetric matrix P, with one negative eigenvalue and all the other eigenvalues positive, such that

(x₁ − x₂)* P [ F(t, x₁) − F(t, x₂) + λ(x₁ − x₂) ] ≤ −ε |x₁ − x₂|²

for all x₁, x₂ ∈ R^N and t ∈ R.
We will show that, under our hypothesis of periodicity, (H3) implies (H2).

Proposition 5.1 If F satisfies the periodicity conditions above and (H3) holds, then (H2) holds.

Proof: Let λ₁ < 0 and λ₂, . . . , λ_N > 0 be the eigenvalues of P, counted according to their multiplicities, and let V₁, V₂, . . . , V_N be an orthogonal basis of eigenvectors of P associated to λ₁, λ₂, . . . , λ_N, respectively. Taking x₁ = R and x₂ = 0 in (H3), and using the periodicity of F, we obtain R*PR ≤ −(ε/λ)|R|² < 0, so we conclude that R ∉ span{V₂, . . . , V_N}. Define F = span{V₂, . . . , V_N} and Q the projection over F such that Ker Q = span{R}. Let x₁(t), x₂(t) be two distinct solutions of (1) bounded on the cylinder. Considering a constant C₁ such that |Q(x₁(t) − x₂(t))| < C₁ for all t ∈ R, and defining V(x) = x*Px, we will prove that the set

Γ = { x ∈ R^N : |Qx| ≤ C₁ and V(x) ≥ 0 }

is bounded. Indeed, if this is not the case then there exists a sequence in Γ that can be written as x_n = Q(x_n) + α_n R with α_n an unbounded sequence. On the other hand, denoting by |·| the spectral norm of a matrix,

V(x_n) ≤ |P| C₁² + 2 |α_n| |P| C₁ |R| + α_n² R*PR → −∞,

since R*PR < 0, which gives a contradiction with V(x_n) ≥ 0. Suppose now, contrary to (H2), that x₁(0) − x₂(0) ∈ F \ {0}; then V(x₁(0) − x₂(0)) > 0, and (H3) gives

V(x₁(t) − x₂(t)) ≥ e^{−2λt} V(x₁(0) − x₂(0)) > 0

for all t < 0; so {x₁(t) − x₂(t), t < 0} is unbounded and a subset of Γ, which is absurd.
In the same paper [16] the author also gave conditions for a system of the type of (2) to satisfy condition (H3), and hence, by Proposition 5.1, condition (H2). From now on consider a system of the type of (2) satisfying the assumptions of section 3. For each λ ∈ ]0, Λ[ we have det[C + λI − iωI] ≠ 0 for every ω ∈ R, so we can consider the number

μ(λ) = sup_{ω ∈ R} |(C + λI − iωI)^{−1}|,

where |·| denotes the spectral norm. We will say that J is K-Lipschitz in the second variable if

|J(t, x) − J(t, y)| ≤ K |x − y|

for every (x, y, t) ∈ R^{2N+1}. The following result was proved in [15] and [16] under much more general conditions; for the convenience of the reader we present a proof for this particular setting.

Theorem 5.2 If J is K-Lipschitz in the second variable with K < μ(λ)^{−1} for some λ ∈ ]0, Λ[, then (2) satisfies (H3).

Proof: Notice that if we define V(x) = x*Px then (H3) is satisfied if and only if for every pair x₁, x₂ of solutions of (2) we have

(d/dt) V(x₁(t) − x₂(t)) + 2λ V(x₁(t) − x₂(t)) ≤ −ε |x₁(t) − x₂(t)|². (7)

Given an arbitrary pair x₁, x₂ of solutions of (2), the function X = x₁ − x₂ satisfies the equation

X' = CX + M(t)X,

where M(t) is a measurable matrix function with |M(t)| ≤ K. So, if we consider γ ∈ R such that μ(λ)^{−1} > γ^{−1} > K, then the real matrix polynomial B(z) = γ²(C* + λI + zI)(C + λI − zI) − I is such that B(z)* = B(−z) and B(iω) is positive definite for every ω ∈ R. A result of matrix theory (see [3], Theorem 4) shows that there exists a real matrix polynomial h(z) = C₁ + C₂z such that B(z) = h(−z)*h(z). Consider the symmetric matrix P associated to this factorization. We conclude that (7) is satisfied with ε = 1 − γ²(sup_{t∈R} |M(t)|)² > 0.
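The number μ(λ) = sup over ω ∈ R of the spectral norm |(C + λI − iωI)^{−1}| can be approximated numerically by scanning a frequency grid; the diagonal matrix C below is a toy example (one zero eigenvalue, the others negative), not a matrix from the paper.

```python
import numpy as np

# Numerical sketch of mu(lambda) = sup_w |(C + lambda*I - i*w*I)^{-1}|
# (spectral norm), approximated on a finite grid of frequencies w.

def mu(C, lam, omegas):
    I = np.eye(C.shape[0])
    return max(np.linalg.norm(np.linalg.inv(C + lam * I - 1j * w * I), 2)
               for w in omegas)

C = np.diag([0.0, -1.0, -2.0])           # toy example: Lambda = 1 here
omegas = np.linspace(-10.0, 10.0, 2001)  # grid containing w = 0
m = mu(C, 0.5, omegas)
# for a diagonal C the sup is attained at w = 0, where the norm equals
# 1 / min_i |lambda_i + lambda| = 1 / min(0.5, 0.5, 1.5) = 2
```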
We could repeat the same argument to show that (H3) is satisfied with the same P and ε for J ≡ 0, i.e.,

(x₁ − x₂)* P [ C(x₁ − x₂) + λ(x₁ − x₂) ] ≤ −ε |x₁ − x₂|²

for every x₁, x₂ ∈ R^N. With x₁ = x and x₂ = x + h we obtain

h* P (C + λI) h ≤ −ε |h|².

So the symmetric matrix of the quadratic form, (1/2)(P(C + λI) + (C* + λI)P), is negative definite. From the general inertia Theorem (see [4], p. 445) we conclude that −P and C + λI have the same number of eigenvalues with positive, null, and negative real parts.
Given an equation of the type of (1), we can write it in the form (2) in many different ways, and in general this will give distinct conditions for a system to satisfy (H3). As a consequence of the last Theorem, Proposition 5.1, and Theorem 2.2 we obtain:

Theorem 5.3 If condition (H1) is satisfied and J is K-Lipschitz in the second variable with K < μ(λ)^{−1} for some λ ∈ ]0, Λ[, then π/A is a homeomorphism from A onto T¹.

Some examples of application
In this section we consider a system of the type of (2) satisfying the hypotheses of section 3; moreover, the eigenvalues of C are λ₁ = 0, λ₂, . . . , λ_N, which we suppose to be real, written in decreasing order, and counted according to their multiplicities. When the matrix C is diagonal the number μ(λ) can be easily computed.

Theorem 6.1 Suppose that (H1) is satisfied, that M is an invertible real matrix such that MCM⁻¹ = D = diag{0, λ₂, λ₃, . . . , λ_N}, and that G(t, y) = MJ(t, M⁻¹y) is K-Lipschitz in the second variable with K < −λ₂/2. Then π/A is a homeomorphism from A onto T¹.

Proof: The change of variables y = Mx transforms equation (2) into

y' = Dy + G(t, y), (8)

where D = diag{0, λ₂, λ₃, . . . , λ_N}. Notice that the equation above is of the type of (2); in particular, G(t, y) = G(t + T, y) = G(t, y + R̃) with R̃ = MR. Since D is diagonal,

μ(λ) = sup_{ω ∈ R} max_i |λ_i + λ − iω|^{−1} = ( min{λ, −λ₂ − λ} )^{−1},

and the best choice λ = −λ₂/2 gives μ(λ)^{−1} = −λ₂/2. We conclude by Theorem 5.2 and Proposition 5.1 that (8) satisfies (H2) whenever K < −λ₂/2. We claim that if (8) satisfies (H2) then (2) also satisfies (H2). Indeed, if F̃ is an (N − 1)-dimensional subspace in the conditions of (H2) for (8), and x₁, x₂ are two distinct solutions of (2) bounded in C, then y₁ = Mx₁ and y₂ = Mx₂ are distinct solutions bounded in the cylinder C̃ corresponding to equation (8); thus y₁(0) − y₂(0) ∉ F̃, and then x₁(0) − x₂(0) ∉ M⁻¹F̃. So system (2) satisfies condition (H2) with F = M⁻¹F̃. The result follows from Theorem 2.2.

Example 1
When the matrix C is symmetric we can easily compute the Lipschitz constant of G. Consider for example the particular case of equation (2) with C an N × N, N ≥ 3, matrix of the form (9), where ν > 0. The autonomous case J(t, x) = (w₁ − sin(x₁), . . . , w_N − sin(x_N)) was studied in [13], where the existence of a one-dimensional attractor was proved for every ν > 0. The matrix C is symmetric and its eigenvalues are

λ_k = −4ν sin²( (k − 1)π / 2N ),  k = 1, . . . , N.

The non-autonomous case is treated in the next Theorem.
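The following sketch checks the eigenvalue formula numerically. The displayed form of the matrix (9) was lost here; a standard candidate consistent with the eigenvalues quoted in the text, and with the bound 2ν sin²(π/2N) = −λ₂/2 of Theorem 6.2, is C = −νL with L the "free ends" tridiagonal Laplacian (an assumption for illustration, not taken from the paper).

```python
import numpy as np

# Candidate (assumed) coupling matrix: C = -nu * L with L tridiagonal,
# diagonal (1, 2, ..., 2, 1), off-diagonals -1.  Its eigenvalues are
# -4*nu*sin((k-1)*pi/(2N))**2, k = 1..N, so the largest non-null one is
# lambda_2 = -4*nu*sin(pi/(2N))**2, giving -lambda_2/2 = 2*nu*sin(pi/(2N))**2.

def coupling_matrix(N, nu):
    L = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    L[0, 0] = L[-1, -1] = 1.0
    return -nu * L

N, nu = 5, 0.7   # illustrative values
eigs = np.sort(np.linalg.eigvalsh(coupling_matrix(N, nu)))[::-1]
predicted = np.array([-4 * nu * np.sin((k - 1) * np.pi / (2 * N)) ** 2
                      for k in range(1, N + 1)])
```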
Theorem 6.2 If condition (H1) is satisfied and J is K-Lipschitz in the second variable with K < 2ν sin²(π/2N), then π/A is a homeomorphism from A onto T¹.
Proof: Let M be an orthogonal matrix such that MCM⁻¹ is diagonal, with the eigenvalues of C as its diagonal entries. Since M is orthogonal, G(t, y) = MJ(t, M⁻¹y) is also K-Lipschitz in the second variable, and K < 2ν sin²(π/2N) = −λ₂/2, so the result follows from the last Theorem.

Example 2
For each N ≥ 2 consider the system in R^N

u'' + γu' + Au = S(t, u), (10)

where γ is a positive constant called the friction coefficient and A ∈ M_{N×N}(R) is a symmetric matrix with eigenvalues α₁ = 0 and α_i > 0, i = 2, . . . , N (written in increasing order and counted according to their multiplicities). Let us suppose that Ker A = span{η}. Moreover, S : R × R^N → R^N is continuous and such that S(t, u + η) = S(t + T, u) = S(t, u) for some T > 0. As usual, we will assume that the solutions exist, are unique for each set of initial conditions, and vary continuously with them. This kind of equation appears, for example, as a model of coupled forced pendula or as a discretization of the sine-Gordon equation. System (10) can be written in the form of (2) with x = (u, v)* ∈ R^{2N}, v = u',

C = ( 0 I ; −A −γI ),   J(t, x) = (0, S(t, u))*. (11)

Notice that J(t, x + R) = J(t, x) with R = (η, 0)* ∈ R^{2N} and R ∈ Ker C.
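The spectrum of the first-order matrix can be checked numerically. Writing (10) as x' = Cx + J(t, x) with x = (u, v), v = u', suggests the block form C = [[0, I], [−A, −γI]] (a hedged sketch of the display lost above); for each eigenvalue α of A, C then has the eigenvalue pair (−γ ± √(γ² − 4α))/2, all real when α_N < γ²/4.

```python
import numpy as np

# Hedged sketch: block matrix of the first-order form of u'' + g u' + A u = S.

def block_C(A, gamma):
    N = A.shape[0]
    Z, I = np.zeros((N, N)), np.eye(N)
    return np.block([[Z, I], [-A, -gamma * I]])

alphas = np.array([0.0, 0.3, 0.8])   # toy spectrum: alpha_1 = 0, rest positive
A = np.diag(alphas)                  # illustrative symmetric A
gamma = 2.0                          # gamma**2/4 = 1 > max(alphas): real case
eigs = np.sort(np.linalg.eigvals(block_C(A, gamma)).real)
predicted = np.sort(np.concatenate([
    (-gamma + np.sqrt(gamma**2 - 4 * alphas)) / 2,
    (-gamma - np.sqrt(gamma**2 - 4 * alphas)) / 2]))
```

In particular the largest non-null eigenvalue is (−γ + √(γ² − 4α₂))/2, as stated below.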

For each i = 2, . . . , N (we will suppose from now on that α_N < γ²/4, in order that the eigenvalues of C are real) one can construct matrices P₂, P₃, P₄ such that P₄P₃P₂ C P₂⁻¹P₃⁻¹P₄⁻¹ is diagonal, with the eigenvalues of C as its diagonal entries. Notice that if α_N < γ²/4 the non-null eigenvalues of C are all real and negative; moreover, the largest non-null eigenvalue of C is (−γ + √(γ² − 4α₂))/2. Our next goal will be to compute the Lipschitz constant of G(t, y) = P₄P₃P₂ J(t, P₂⁻¹P₃⁻¹P₄⁻¹ y).
Lemma 6.3 Suppose that α_N < γ²/4. If y = (0, v) ∈ R^{2N} then |P₄P₃P₂ y| = 2|v|, and for each y = (u, v) ∈ R^{2N} the norm |P₄P₃P₂ y| admits a corresponding lower bound.

Proof: The first equality is a straightforward consequence of the form of (11). Given y = (u, v) ∈ R^{2N}, then P₄P₃P₂ y = P₄(ηu, ηv, η₂u, η₂v, . . . , η_N u, . . . ).

We can finally prove the central Theorem of this example.

Theorem 6.4 Suppose that (H1) is satisfied, α_N < γ²/4, and S is K-Lipschitz in the second variable. If K is sufficiently small, then π/A is a homeomorphism from A onto T¹.
Proof: By Theorem 6.1 we only need to show that the Lipschitz constant of G(t, y) in the second variable is less than γ/4 − √(γ² − 4α₂)/4, that is, less than −λ₂/2. For y = P₄P₃P₂(u, v) and ỹ = P₄P₃P₂(ũ, ṽ) in R^{2N} and t ∈ R, this follows from the last Lemmas. All the preceding argument can be repeated, with minor modifications, to include the case N = 1; in this way we recover the results of section 4 for the particular case h ≡ c.
The particular case where A is of the type of (9) and S(t, u) = (sin(u 1 ) − w 1 , . . . , sin(u N ) − w N ) with (w 1 , . . . , w N ) ∈ R N was studied in [12], where the existence of an attractor homeomorphic to the circle was proved using different methods.

Example 3
Consider the N-th order ordinary differential equation

x^{(N)} + a_{N−1} x^{(N−1)} + · · · + a₁ x' = g(t, x, x', . . . , x^{(N−1)}), (12)

where a₁, . . . , a_{N−1} ∈ R and g : R^{N+1} → R is a continuous function. Writing y = (x, x', . . . , x^{(N−1)})*, we will suppose that g(t + T, y) = g(t, y) = g(t, y + (1, 0, . . . , 0)*) for some T > 0. As usual, we will suppose that g is such that there is existence and uniqueness of solutions for each set of initial conditions. We can write the equation above as a system of the form of (2), where C is the companion matrix of the linear part. The matrix C has the eigenvalue λ₁ = 0 with associated eigenvector (1, 0, . . . , 0)*. We will suppose that all the roots of the polynomial

λ^{N−1} + a_{N−1} λ^{N−2} + · · · + a₂ λ + a₁

are real, negative, and of multiplicity one. In other words, the non-null eigenvalues λ₂, λ₃, . . . , λ_N of C are real, negative, and distinct (we suppose that they are written in decreasing order). Each of these eigenvalues has (1, λ_i, λ_i², . . . , λ_i^{N−1})* as an eigenvector, so the matrix

P = [ (1, 0, . . . , 0)* | (1, λ₂, . . . , λ₂^{N−1})* | · · · | (1, λ_N, . . . , λ_N^{N−1})* ]

diagonalizes C. The matrix P is the so-called Vandermonde matrix, well known in polynomial interpolation theory. Indeed, the equation P*x = y is equivalent to the existence of a polynomial L of degree N − 1 satisfying L(0) = y₁, L(λ₂) = y₂, . . . , L(λ_N) = y_N.
L is called the Lagrange interpolation polynomial, and its existence and uniqueness depend on the Vandermonde determinant

det P = ∏_{1 ≤ i < j ≤ N} (μ_j − μ_i),  where μ₁ = 0 and μ_i = λ_i for i ≥ 2.

Since we are assuming that the eigenvalues λ_i are all distinct (and non-zero), this determinant is different from zero.
Although the matrix P is invertible, its inverse is not easy to compute explicitly. Nevertheless, the polynomial L can be written as L(t) = y₁L₁(t) + y₂L₂(t) + · · · + y_N L_N(t), where, for each i = 1, . . . , N,

L_i(t) = ∏_{j ≠ i} (t − μ_j)/(μ_i − μ_j).

The following vector will play a fundamental role in what follows:

Ω = (Ω₁, . . . , Ω_N)*,  Ω_i = ∏_{j ≠ i} (μ_i − μ_j)^{−1},

whose i-th coordinate is the leading coefficient of L_i.

Lemma 6.5 Given x = (0, 0, . . . , 0, x_N)* ∈ R^N, we have P^{−1}x = x_N Ω.
Proof: Let x = (0, 0, . . . , 0, x_N)* ∈ R^N and y ∈ R^N; then (P^{−1}x)*y = x*(P^{−1})*y = x*(P*)^{−1}y is the product of x_N by the coefficient of degree N − 1 of L(t), that is,

x_N ( y₁Ω₁ + y₂Ω₂ + · · · + y_N Ω_N ) = x_N Ω*y.

Since y is arbitrary, we obtain the result.
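Lemma 6.5 is easy to check numerically: with P the matrix whose i-th column is (1, μ_i, μ_i², . . . , μ_i^{N−1})* for nodes μ₁ = 0 > μ₂ > · · · > μ_N, and Ω_i = ∏_{j≠i}(μ_i − μ_j)^{−1} the leading coefficient of the i-th Lagrange basis polynomial, one has P^{−1}(0, . . . , 0, x_N)* = x_N Ω. The node values below are illustrative choices.

```python
import numpy as np

# Numerical check of Lemma 6.5 for illustrative nodes m_1 = 0 > m_2 > ... > m_N.
nodes = np.array([0.0, -1.0, -2.5, -4.0])
N = len(nodes)
P = np.vander(nodes, increasing=True).T   # column i is (1, m_i, ..., m_i**(N-1))
Omega = np.array([1.0 / np.prod([nodes[i] - nodes[j]
                                 for j in range(N) if j != i])
                  for i in range(N)])
x = np.zeros(N)
x[-1] = 3.0                               # x = (0, ..., 0, x_N) with x_N = 3
lhs = np.linalg.solve(P, x)               # P^{-1} x
```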
The following Theorem gives a sufficient condition for the existence of an attractor homeomorphic to the circle.

Theorem 6.6 Suppose that (H1) is satisfied and g is K-Lipschitz in y. If

K < −λ₂ / ( 2 |Ω| |P| ),

then π/A is a homeomorphism from A onto T¹.
We remark that, in the special case where g depends only on x, R. A. Smith, in the last section of [16], gave a circle criterion for equation (12) to satisfy (H3) which seems to give sharper estimates than the last Theorem.
As an example of application we can consider the system

x'' + c₁x' + sin(x) = p(t),
y'' + c₂y' + y = x,

which models a spring with friction driven by the motion of a pendulum with friction, forced by a periodic force p(t). The function y satisfies the fourth order equation

y^{(4)} + (c₁ + c₂)y''' + (1 + c₁c₂)y'' + c₁y' = −sin(y'' + c₂y' + y) + p(t),

which is of the type of (12). If c₂ > 2 and c₁ > 0, then the roots of λ³ + (c₁ + c₂)λ² + (1 + c₁c₂)λ + c₁ are the negative real numbers −c₁ and (−c₂ ± √(c₂² − 4))/2, so the last Theorem can be applied.
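The factorization behind the root computation, λ³ + (c₁ + c₂)λ² + (1 + c₁c₂)λ + c₁ = (λ + c₁)(λ² + c₂λ + 1), can be verified numerically; the values of c₁, c₂ below are illustrative choices satisfying c₂ > 2, c₁ > 0.

```python
import numpy as np

# Check that the cubic above has roots -c1 and (-c2 +- sqrt(c2**2 - 4))/2.
c1, c2 = 0.5, 3.0
roots = np.sort(np.roots([1.0, c1 + c2, 1.0 + c1 * c2, c1]).real)
expected = np.sort(np.array([-c1,
                             (-c2 + np.sqrt(c2**2 - 4)) / 2,
                             (-c2 - np.sqrt(c2**2 - 4)) / 2]))
```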