On the number of limit cycles for perturbed pendulum equations

We consider perturbed pendulum-like equations on the cylinder of the form $ \ddot x+\sin(x)= \varepsilon \sum_{s=0}^{m}{Q_{n,s} (x)\, \dot x^{s}}$, where the $Q_{n,s}$ are trigonometric polynomials of degree $n$, and study the number of limit cycles that bifurcate from the periodic orbits of the unperturbed case $\varepsilon=0$ in terms of $m$ and $n$. Our first result gives upper bounds on the number of zeros of the associated first-order Melnikov function, in both the oscillatory and the rotary regions. These upper bounds are obtained by expressing the corresponding Abelian integrals in terms of polynomials and the complete elliptic functions of the first and second kind. Some further results give sharp bounds on the number of zeros of these integrals by identifying subfamilies which are shown to be Chebyshev systems.


Introduction
The so-called Hilbert's 16th Problem was proposed by David Hilbert at the Paris conference of the International Congress of Mathematicians in 1900. The problem is to determine an upper bound for the number of limit cycles of two-dimensional polynomial vector fields of degree $d$, and to investigate their relative positions, see [11,15]. There is also a weaker version, the so-called infinitesimal or tangential Hilbert's 16th Problem, proposed by Arnold, which can be stated in the following way: let $\omega$ be a real 1-form with polynomial coefficients of degree at most $d$, and consider a polynomial $H$ of degree $d + 1$. A closed connected component of a level curve $H = h$, denoted by $\gamma_h$, is called an oval of $H$. These ovals form continuous families. The infinitesimal Hilbert's 16th Problem then asks for an upper bound $V(d)$ on the number of real zeros of the Abelian integral $I(h) = \oint_{\gamma_h} \omega$. The bound should be uniform with respect to the polynomial $H$, the family of ovals $\{\gamma_h\}$ and the form $\omega$, i.e. it should depend only on the degree $d$, cf. [11,10]. The existence of $V(d)$ goes back to the works of Khovanskii and Varchenko ([14,21]). Recently an explicit (unrealistic) bound for $V(d)$ was given in [2] by Binyamini, Novikov and Yakovenko. There is a beautiful relationship between limit cycles and zeros of Abelian integrals: consider a small deformation $X_\varepsilon = X_H + \varepsilon Y$ of a Hamiltonian vector field, where $X_H = -H_y \partial_x + H_x \partial_y$, $Y = P \partial_x + Q \partial_y$ and $\varepsilon > 0$ is a small parameter. Denote by $d(h, \varepsilon)$ the displacement function of the Poincaré map of $X_\varepsilon$ and consider its power series expansion in $\varepsilon$. The coefficients in this expansion are called the Melnikov functions $M_k(h)$. The limit cycles of the vector field $X_\varepsilon$ correspond to isolated zeros of the first non-vanishing Melnikov function.
A closed expression for the first Melnikov function $M_1(h) = I(h)$ was obtained by Pontryagin, who showed that it is given by the Abelian integral $I(h) = \oint_{\gamma_h} P\,dy - Q\,dx$. Hence the number of isolated zeros of $I(h)$, counting multiplicity, provides an upper bound for the number of ovals of $H$ that generate limit cycles of $X_\varepsilon$ for $\varepsilon$ close to zero. The coefficients of $P$ and $Q$ are considered as parameters, and so $I(h)$ splits into a linear combination $I(h) = \alpha_0 I_0(h) + \cdots + \alpha_\ell I_\ell(h)$, for some $\ell \in \mathbb{N}$, where the coefficients $\alpha_k$ depend on the initial parameters and the $I_k(h)$ are Abelian integrals of monomial forms $\omega_k = x^{i_k} y^{j_k}\,dx$. Therefore, the problem of finding the maximum number of isolated zeros of $I(h)$ is equivalent to finding an upper bound for the number of isolated zeros of any function belonging to the vector space generated by the $I_j(h)$, $j = 0, \dots, \ell$. This equivalent problem becomes easier when the basis of this vector space is a Chebyshev system, see Section 3 for details.
We are interested in these considerations because we want to analyze, in terms of $m$ and $n$, the number of periodic orbits of perturbed pendulum-like equations of the form (1) $\ddot x + \sin(x) = \varepsilon \sum_{s=0}^{m} Q_{n,s}(x)\,\dot x^{s}$, where for each $s$ the function $Q_{n,s}$ is a trigonometric polynomial of degree at most $n$ and $\varepsilon > 0$ is a small parameter. The planar system associated to (1) can be viewed as a trigonometric perturbation of the Hamiltonian system (2) $\dot x = y$, $\dot y = -\sin(x)$, with total energy (3) $H(x, y) = \frac{y^2}{2} + 1 - \cos(x)$, which in fact can be considered on the cylinder $[-\pi, \pi] \times \mathbb{R}$. In other words, we are interested in quantifying in terms of $m$ and $n$ the number of limit cycles that bifurcate from the closed ovals of the unperturbed pendulum equation $\ddot x + \sin x = 0$. This problem can be seen as an extension of the infinitesimal Hilbert's 16th Problem to the trigonometric world. Notice that for $h \in (0, 2)$ the levels $\gamma_h = \{(x, y);\ H(x, y) = h\}$ are ovals surrounding the origin, while for $h \in (2, \infty)$ the corresponding levels have two connected components which are again ovals, one of them contained in the region $y > 0$, denoted by $\gamma_h^+$, and the other contained in the region $y < 0$, denoted by $\gamma_h^-$. The region corresponding to energies $h \in (0, 2)$ is usually called the oscillatory region and we denote it by $R_0$. The regions with energies $h \in (2, \infty)$ and $\pm y > 0$ are denoted by $R_\pm$ and together form the so-called rotary region.
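The structure of these two regions is elementary to check numerically. The following sketch (function names are ours, numpy assumed; it is an illustration, not part of the paper's argument) verifies that for $h \in (0,2)$ the oval $\gamma_h$ meets $y = 0$ exactly at the turning points $x = \pm\arccos(1-h)$, while for $h > 2$ the branch $\gamma_h^+$ stays bounded away from $y = 0$ over the whole circle.

```python
import numpy as np

def y_on_level(x, h):
    """Solve y^2/2 + 1 - cos(x) = h for y >= 0 where real."""
    return np.sqrt(np.maximum(0.0, 2.0 * (h - 1.0 + np.cos(x))))

# Oscillatory region: h in (0, 2).  The oval meets y = 0 at x = +/- arccos(1 - h).
h = 1.0
x_r = np.arccos(1.0 - h)
assert np.isclose(y_on_level(x_r, h), 0.0, atol=1e-6)   # turning point
assert h - 1.0 + np.cos(x_r + 0.1) < 0                  # no real y beyond it

# Rotary region: h in (2, inf).  y^2 = 2(h - 1 + cos x) >= 2(h - 2) > 0 on [-pi, pi],
# so gamma_h^+ (and its mirror gamma_h^-) is a graph over the whole circle.
h = 3.0
xs = np.linspace(-np.pi, np.pi, 1001)
assert np.all(y_on_level(xs, h) >= np.sqrt(2.0 * (h - 2.0)) - 1e-12)
```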
The analysis of equations of this form is also motivated by a number of problems involving pendulum-like equations appearing in the literature. Examples include the system $\dot x = y$, $\dot y = \alpha + \beta y + \gamma y^2 + \delta \cos(2\pi x)$, where $\alpha, \beta, \gamma$ and $\delta$ are real parameters, which was considered in [4]. Another interesting example with pendulum-type behaviour is the equation $\ddot x + \sin(x) = \varepsilon \dot x \cos(nx)$, $n \in \mathbb{N}$, considered by Morozov in [17]. The author proves that for $\varepsilon > 0$ small enough this system has exactly $n - 1$ hyperbolic limit cycles in $R_0$, and no limit cycles in $R_\pm$. The proof relies on a representation of the Abelian integrals in terms of polynomials and the complete elliptic functions of the first and second kind.
A further example of a pendulum-like equation is the Josephson equation where ε > 0 is a small parameter and a, γ ∈ R. This equation was studied by various authors [1,12,19,20] by analyzing the corresponding averaged system whose right-hand side consists of three Abelian integrals. Instead of expressing these integrals in terms of complete elliptic integrals the authors of [20] use techniques from bifurcation theory to find the bifurcation diagram and corresponding phase portraits on the cylinder for the resulting two-parameter family of vector fields. Realizing that the aforementioned Abelian integrals satisfy a certain Picard-Fuchs equation, they then analyze the solutions of the resulting Riccati equations. Another very related problem is the study of the periodic solutions of the perturbed whirling pendulum, performed in [16], with α and γ real parameters and ε a small parameter. Notice that in this problem the unperturbed Hamiltonian is not system (2).
To state our results we first fix some notation and definitions. A Fourier polynomial of degree $n$ is an element of the $(2n+1)$-dimensional real linear space generated by $1, \sin(x), \dots, \sin(nx), \cos(x), \dots, \cos(nx)$. It is well known that this space coincides with the space of degree $n$ two-variable polynomials in $(\sin(x), \cos(x))$. Given a Fourier polynomial $P(x) = \sum_{i=0}^{n} a_i \sin(ix) + b_i \cos(ix)$ we denote by $P^e(x)$ its even part, that is, $P^e(x) = \sum_{i=0}^{n} b_i \cos(ix)$. Note that any even Fourier polynomial of degree $n$ can equivalently be written as a degree $n$ polynomial in $\cos(x)$. At different points in the paper we will choose whichever expression is more suitable for our purpose. From now on, for any real number $x$ we denote by $E(x)$ its integer part.
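The identification of even Fourier polynomials with polynomials in $\cos(x)$ goes through the Chebyshev polynomials, $\cos(ix) = T_i(\cos x)$. A quick numerical confirmation (our own illustration, using numpy's Chebyshev module):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# cos(i*x) = T_i(cos(x)), so an even Fourier polynomial sum_i b_i cos(i x)
# equals the polynomial sum_i b_i T_i evaluated at u = cos(x).
rng = np.random.default_rng(0)
b = rng.standard_normal(4)          # coefficients b_0..b_3, degree n = 3
xs = np.linspace(-np.pi, np.pi, 200)

fourier = sum(b[i] * np.cos(i * xs) for i in range(4))
poly_in_cos = C.chebval(np.cos(xs), b)   # same b, read as Chebyshev coefficients

assert np.allclose(fourier, poly_in_cos)
```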
Our first main result gives general upper bounds for the number of zeros of the first Melnikov integral.
Theorem A. Consider the system (4) $\dot x = y$, $\dot y = -\sin(x) + \varepsilon \sum_{s=0}^{m} Q_{n,s}(x)\, y^{s}$, where the $Q_{n,s}$ are Fourier polynomials of degree $n$, and let $M_0 : (0, 2) \to \mathbb{R}$ and $M_\pm : (2, \infty) \to \mathbb{R}$ be the associated first Melnikov functions, defined by $M_0(h) = \oint_{\gamma_h} \sum_{s=0}^{m} Q_{n,s}(x)\, y^s\, dx$ and $M_\pm(h) = \oint_{\gamma_h^\pm} \sum_{s=0}^{m} Q_{n,s}(x)\, y^s\, dx$ in $R_0$ and $R_\pm$, respectively. Then the following statements hold:
(a) If $M_+(h)$ is not identically zero in $(2, \infty)$ then it has at most $2n + 2m + E(m/2) + 2$ zeros, counting multiplicity, in the interval $(2, \infty)$. The same result holds for $M_-(h)$.
(b) If $M_0(h)$ is not identically zero in $(0, 2)$ then it has at most $2n + 2E((m - 1)/2) + 1$ zeros, counting multiplicity, in $(0, 2)$.
The bounds given in Theorem A are not optimal. In the following two theorems we give optimal bounds for some particular cases in the oscillatory region (Theorem B) as well as in the rotary region (Theorem C). To this end, given two natural numbers $s_1 \le s_2$, we set $r = o(s_1, s_2)$ and $\ell = l(s_1, s_2)$ when $r \ge 0$. Then:
(a) If $r = -1$ then the system has a center at the origin for all $\varepsilon$, and no limit cycles bifurcate from $R_0$.
(b) If $r \ge 0$ then $M_0(h) = \oint_{\gamma_h} \sum_{s=0}^{r} Q^e_{n,\ell+2s}(x)\, y^{\ell+2s}\, dx$ and it holds that:
(b$_1$) If $0 \le r < (\ell + 3)/2$ and $M_0$ is not identically zero then it has at most $n + 2r$ zeros counting multiplicity. Moreover, if $r \le 2$ and $n > 0$ then there exist even Fourier polynomials $Q_{n,s}(x)$ such that the Melnikov function has exactly $n + 2r$ zeros counting multiplicity.
(b$_2$) If $s_1 = s_2$ is odd then there are at most $n$ limit cycles that bifurcate from the period annulus. This bound is optimal.
Item (b$_1$) in Theorem B gives upper bounds for the number of zeros of the first Melnikov function in $R_0$. Statement (b$_1$) also says that these bounds are optimal when $n > 0$ and $r \le 2$. In fact, we believe that they are optimal for all $r$ when $n > 0$, but we have not succeeded in proving it. In the case $n = 0$ these bounds are not optimal, because of the following result:
Proposition 1.1. The system $\dot x = y$, $\dot y = -\sin(x) + \varepsilon \sum_{s=0}^{r} a_s y^{2s+1}$, $r \le 30$, has at most $r$ limit cycles bifurcating from the period annulus $R_0$.
We suspect that the proposition holds for all $r$, but our proof relies on huge explicit computations showing that the family $(I_{0,1}, I_{0,3}, \dots, I_{0,2r+1})$ is an extended complete Chebyshev system on $(0, 2)$. We have performed these computations only up to $r = 30$.
The next theorem gives bounds for the number of limit cycles bifurcating in the rotary regions $R_\pm$.
Theorem C. Consider the system $\dot x = y$, $\dot y = -\sin(x) + \varepsilon\big(Q_n(x)y^{2p+1} + \sum_{s=0}^{r} Q_{n,s}(x)y^{2s}\big)$, where $p, r \in \mathbb{N}$ and $Q_{n,s}$ and $Q_n$ are Fourier polynomials of degree $n$, and let $M_\pm : (2, \infty) \to \mathbb{R}$ be its associated first Melnikov functions on the rotary regions $R_\pm$. Assume also that $M_\pm(h)$ is not identically zero. Then it has at most $n + r + 1$ zeros in $(2, \infty)$, counting multiplicity. This bound is optimal on each of the regions $R_-$ and $R_+$. Moreover, this upper bound can be reduced to $r$ when $Q_n(x) \equiv 0$, and to $n$ when $Q_{n,0}(x) \equiv Q_{n,1}(x) \equiv \cdots \equiv Q_{n,r}(x) \equiv 0$. These upper bounds are also sharp on each of the regions $R_-$ and $R_+$.
While finishing this paper we became aware of the book [18], where similar questions are treated in detail. To compare those results with ours we apply the above theorems to the simple example (5) $\dot x = y$, $\dot y = -\sin(x) + \varepsilon \sum_{s=0}^{3} \big(a_s + b_s \sin(x) + c_s \cos(x)\big) y^s$, where $a_s, b_s, c_s \in \mathbb{R}$. In the notation of Theorem A, $n = 1$ and $m = 3$. Using item (b) of Theorem A we get that in the oscillatory region the maximum number of zeros, counting multiplicity, of a nonvanishing Melnikov function $M_0$ is $2n + 2E((m - 1)/2) + 1 = 5$. Item (b$_1$) of Theorem B improves this upper bound. Indeed, in the notation of Theorem B, $n = 1$, $s_1 = 0$ and $s_2 = 3$. Then $r = o(0, 3) = 1$, $\ell = l(0, 3) = 1$ and the hypothesis $0 \le r < (\ell + 3)/2 = 2$ holds. Thus, the maximum number of zeros, counting multiplicity, is $n + 2r = 3$. Moreover, since $r \le 2$ and $n > 0$, this upper bound is sharp.
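The bound arithmetic in this example is easy to mechanize. The small script below (function names are ours) evaluates the bounds of Theorem A and the improved bound of Theorem B for the example $(5)$:

```python
from math import floor

def E(x):                 # integer part, as in the paper
    return floor(x)

def bound_osc(n, m):      # Theorem A, oscillatory region R_0
    return 2 * n + 2 * E((m - 1) / 2) + 1

def bound_rot(n, m):      # Theorem A, rotary regions R_+ / R_-
    return 2 * n + 2 * m + E(m / 2) + 2

# Example (5): n = 1, m = 3.
assert bound_osc(1, 3) == 5
assert bound_rot(1, 3) == 11

# Theorem B for the same system: r = 1, and n + 2r = 3 improves the bound 5.
n, r = 1, 1
assert n + 2 * r == 3
```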
Contrary to our findings, Theorem 4.10 of [18, p. 135] asserts that for the general system (4) the number of zeros of $M_0$ in the oscillatory region $R_0$ is at most $n + E((m - 1)/2)$. Applying this result to system (5) gives an upper bound of 2. Therefore, our results show that Theorem 4.10 is not correct. We want to point out that this can also be seen directly, without using Theorem B, by choosing parameters for which the corresponding Melnikov function $M_0$ has at least 3 zeros. In this situation $M_0$ is a linear combination of four Abelian integrals in $R_0$, and there exist values $a_1, c_1, a_3$ and $c_3$ such that $M_0$ has at least 3 simple zeros in $(0, 2)$, since these four Abelian integrals are linearly independent.
In fact, the line of argument at the beginning of our proof of Theorem A is similar to that of the proofs in [17,18]. The Abelian integral $I(h)$ associated to (4) can be expressed, in both the rotary and the oscillatory regions, in terms of polynomials and the complete elliptic functions of the first and second kind, satisfying certain recurrence relations. We then use this result together with an upper bound, given in [6], on the number of zeros in $(-1, 1)$ of functions of the form $P(k)E(k) + Q(k)K(k)$, where $P$ and $Q$ are polynomials.
In contrast, Morozov studied these functions directly by complexifying the variable k, and applying the argument principle to a suitable domain. This method is, indeed, the one used to prove the results in [6]. So the inaccuracy of the upper bounds given in [18] appears to originate from some of the steps in the analysis of these complexified functions.
The proofs of Theorems B and C are based on criteria developed in [9] and [7], respectively. Both proofs show that certain subfamilies of Abelian integrals associated to (4) form a Chebyshev system. The paper is organized as follows: In Section 2 we prove Theorem A, in Section 3 we give the proof of Theorem B, while Section 4 addresses the proof of Theorem C. Section 5 is devoted to simultaneous bifurcation of limit cycles. Notice that our main results give bounds for the number of zeros of the corresponding Abelian integrals on each of the regions R 0 and R ± by studying them separately. We end the paper with some comments and results showing the difficulties of studying the coexistence of limit cycles in these three regions. This problem has also been addressed briefly in [17,20] for some particular cases of system (4).

Proof of Theorem A
We begin by studying Abelian integrals of the type $\oint_{\gamma_h} \cos^n(x)\, y^r\, dx$, where $r, n \in \mathbb{N}$ and $\gamma_h \subset \{y^2/2 + 1 - \cos(x) = h\}$. We denote (8) $I_{n,r}(h) = \int_0^{x_h} \cos^n(x)\, (h - 1 + \cos(x))^{r/2}\, dx$, where $h \in (0, \infty)$ and the integration boundary is given by $x_h = \arccos(1 - h)$ for $h \in (0, 2)$ and $x_h = \pi$ for $h \in (2, \infty)$. Furthermore, we denote by $I^0_{n,r}$ and $I^+_{n,r}$ the restrictions of $I_{n,r}$ to the intervals $(0, 2)$ and $(2, \infty)$, respectively. Moreover, we denote $I^-_{n,r} = (-1)^r I^+_{n,r}$. These integrals coincide, up to a multiplicative constant, with the corresponding Abelian integral (7).
As we will see, the integrals I n,r (h) can be written in terms of the complete elliptic integrals of first and second kind K and E, see (6). Our computations to prove this fact are inspired by [17] and use several well-known properties of these elliptic functions, see [3,8].
To prove this property for $I_{n,r}(h)$ it is essential that $K$ and $E$ are closed under derivation; that is, expressions of the form (9) $f(k)K(k) + g(k)E(k)$, where $f$ and $g$ are rational functions in $k$, remain of this form after differentiation with respect to $k$. This is due to the fact that the elliptic functions $K$ and $E$ satisfy the Picard-Fuchs equations (10) $\frac{dK}{dk} = \frac{E(k) - (1 - k^2)K(k)}{k(1 - k^2)}$, $\frac{dE}{dk} = \frac{E(k) - K(k)}{k}$, see [3], formulas 710.00 and 710.02. Once we are able to express the integrals (8) in terms of $E$ and $K$, we may use a result derived in [6] which provides an upper bound on the number of zeros of expressions of the form (9).
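The Picard-Fuchs derivation formulas for $K$ and $E$ (Byrd & Friedman 710.00, 710.02) can be verified numerically; in the sketch below (our own check), note that scipy's `ellipk`/`ellipe` take the parameter $m = k^2$:

```python
import numpy as np
from scipy.special import ellipk, ellipe

# scipy's ellipk/ellipe take the parameter m = k**2.
K = lambda k: ellipk(k**2)
E = lambda k: ellipe(k**2)

k, dk = 0.6, 1e-6
dK = (K(k + dk) - K(k - dk)) / (2 * dk)   # central finite difference
dE = (E(k + dk) - E(k - dk)) / (2 * dk)

# dK/dk = (E - (1-k^2) K) / (k (1-k^2)),   dE/dk = (E - K) / k
assert np.isclose(dK, (E(k) - (1 - k**2) * K(k)) / (k * (1 - k**2)), rtol=1e-5)
assert np.isclose(dE, (E(k) - K(k)) / k, rtol=1e-5)
```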
Theorem 2.1 (Theorem 1 in [6]). Let f and g be real polynomials of degree at most n and m, respectively, and let k ∈ (−1, 1). An upper bound for the number of zeros of the function f (k)K(k)+g(k)E(k), taking into account their multiplicity, is n + m + 2.
The next lemma shows that for $n = 0, 1$ the integrals $I^0_{n,1}$ and $I^+_{n,1}$ can be expressed as combinations of $E$ and $K$ with polynomial coefficients.
Lemma 2.2. Denote $L^0_n(h) = I^0_{n,1}(h)$ and $L^+_n(h) = I^+_{n,1}(h)$, see (8). Then the following statements hold:

Proof. (A) The classical change of variables
see [3], allows us to rewrite the first integral $L^0_0$ in terms of $E$ and $K$. Moreover, in view of the fact that $E$ and $K$ satisfy the Picard-Fuchs equations (10), we obtain the first assertion in (A). The first statement in (B) is a straightforward calculation. To show the second statements in (A) and (B), we make a general observation which is true in both cases, that is, in the oscillatory as well as in the rotary region. We drop the superscripts for lighter notation, and notice that the derivative of $L_1$ can be expressed through $L_0$, see (11). To prove the second statement of (A) we make an Ansatz of the form $L_1(h) = a(h)K(k) + b(h)E(k)$. Differentiating this expression and equating it with the right-hand side of (11), we obtain a linear system of differential equations in $a(h)$ and $b(h)$. Comparing coefficients of $K$ and $E$ we obtain the solution. To prove the second statement in (B) we proceed as in case (A) and make an Ansatz of the same form. We then solve the corresponding system of differential equations and obtain the solution. This ends the proof of the lemma.
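Closed forms of this type can be sanity-checked numerically. For instance, the classical formulas for the full Abelian integral $\oint_{\gamma_h} y\,dx$ (which coincides with $I_{0,1}$ up to a multiplicative constant) are $16\big[E(k) - (1-k^2)K(k)\big]$ with $k^2 = h/2$ in $R_0$, and $8E(k)/k$ with $k^2 = 2/h$ along $\gamma_h^+$. The sketch below (our own normalization and naming, scipy assumed) compares them with direct quadrature:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe

def I01_osc(h):
    """Direct quadrature of  oint_{gamma_h} y dx,  h in (0, 2)."""
    x_r = np.arccos(1.0 - h)
    f = lambda x: np.sqrt(np.maximum(0.0, 2.0 * (h - 1.0 + np.cos(x))))
    val, _ = quad(f, 0.0, x_r)
    return 4.0 * val                      # symmetry in x and in y

def I01_rot(h):
    """Quadrature of  int_{-pi}^{pi} y dx  along gamma_h^+,  h in (2, inf)."""
    f = lambda x: np.sqrt(2.0 * (h - 1.0 + np.cos(x)))
    val, _ = quad(f, 0.0, np.pi)
    return 2.0 * val

h = 1.3
k2 = h / 2.0                              # modulus k^2 = h/2 in R_0
closed = 16.0 * (ellipe(k2) - (1.0 - k2) * ellipk(k2))
assert np.isclose(I01_osc(h), closed, rtol=1e-6)

h = 3.7
k2 = 2.0 / h                              # modulus k^2 = 2/h in R_+
closed = 8.0 * ellipe(k2) / np.sqrt(k2)   # = 8 E(k) / k
assert np.isclose(I01_rot(h), closed, rtol=1e-6)
```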
is a homogeneous polynomial of degree m with V 0 (t, s) = 1/(r + 1) and Proof. Let us denote V m = V m (t, s) and for m ∈ N. Integrating by parts and rearranging the terms we find that Now the claim follows by induction. Indeed, a direct calculation shows that U 0 = 1 r+1 (t + s) r+1 = (t + s) r+1 V 0 and similarly U 1 = (t + s) r+1 V 1 . Now assume that statement (12) holds for all i ≤ m. Then, integrating by parts we find that which yields where we have used the induction hypothesis in the second equality of the above expression.
Now we are ready to prove the desired expression for $I_{n,1}(h)$ for any $n \in \mathbb{N}$.
Proposition 2.4. Denote L 0 n (h) = I 0 n,1 (h) and L + n (h) = I + n,1 (h), see (8). Then there exist real polynomials P 0 n , P + n , Q 0 n and Q + n of degree n ∈ N such that Proof. Lemma 2.2 proves the result for n = 0, 1. Now we claim that for n > 1 we have where a i (h) are polynomials with degree i. Note that We want to perform integration by parts in the second integral. To this end, we use Lemma 2.3 with r = 1/2 and s = h − 1 to obtain where P m+1 is homogeneous of degree m + 1 with P m+1 (z, 0) = 2 3+2m z m+1 . Therefore, we have that and the claim is proved. Now the proposition follows directly by induction.
Lemma 2.5. Let $k^2 = 2/h$ and for $h \in (2, \infty)$ consider an expression of the form $\sqrt{h}\,\big(P_m(h)K(k) + Q_m(h)E(k)\big)$, where $P_m, Q_m$ are real polynomials of degree $m$. Then, for all $n \in \mathbb{N}$, the $n$-th derivative of this expression is of the same type, with real polynomials $P_{m+n}$ and $Q_{m+n}$ of degree $m + n$.
Proof. The equality is obviously true for n = 0. The result follows directly by induction using (10).
Proposition 2.6. Let $h \in (2, \infty)$, $k^2 = 2/h$ and $n, s, r \in \mathbb{N}$. Then there exist polynomials $Z_s$, $P_{n+r}$ and $Q_{n+r}$ of degrees $s$ and $n + r$ such that $I^+_{n,2s}(h) = Z_s(h)$ and $I^+_{n,2r+1}(h) = \sqrt{h}\,\big(P_{n+r}(h)K(k) + Q_{n+r}(h)E(k)\big)$. Moreover, any nontrivial function of the form $\tilde Z_s(h) + \sqrt{h}\,\big(\tilde P_{n+r}(h)K(k) + \tilde Q_{n+r}(h)E(k)\big)$, where $\tilde Z_s$, $\tilde P_{n+r}$ and $\tilde Q_{n+r}$ are also polynomials with respective degrees $s$, $n + r$ and $n + r$, has at most $2(n + r) + 3s + 4$ zeros, counting multiplicity.
Proof. For lighter notation we drop the superscripts and observe that $I_{n,2s}(h) = Z_s(h)$, where $Z_s$ is a real polynomial of degree $s$. Furthermore, the expression for $I_{n,2r+1}$ follows in view of Proposition 2.4. Thus, the first statement of the proposition is proved. Differentiating the above expression $s + 1$ times using Lemma 2.5, we see that any zero of the $(s+1)$-st derivative corresponds to a positive zero of $P_{n+r+s+1}(2/k^2)K(k) + Q_{n+r+s+1}(2/k^2)E(k)$, which is also a positive zero of $P_{2(n+r+s+1)}(k)K(k) + Q_{2(n+r+s+1)}(k)E(k)$ for certain even polynomials $P_{2(n+r+s+1)}$ and $Q_{2(n+r+s+1)}$ of degree $2(n + r + s + 1)$. By Theorem 2.1 we know that the number of zeros of this last expression in $(-1, 1)$ is bounded by $4(n + r + s + 1) + 2$. Since the expression is even, the number of its positive zeros, and hence the number of zeros of the $(s+1)$-st derivative in $(2, \infty)$, is at most $2(n + r + s + 1) + 1$. Applying Rolle's Theorem $s + 1$ times then gives the bound $2(n + r + s + 1) + 1 + (s + 1) = 2(n + r) + 3s + 4$.
Proposition 2.7. Let $h \in (0, 2)$ and $k^2 = h/2$. Then $I^0_{n,2r+1}(h) = P_{n+r}(h)K(k) + Q_{n+r}(h)E(k)$ for certain polynomials $P_{n+r}$ and $Q_{n+r}$ of degree $n + r$. Moreover, the number of zeros of $I^0_{n,2r+1}(h)$, counting multiplicity, is less than or equal to $2(n + r) + 1$.
Proof. For $r = 0$ we know that $I^0_{n,1}(h) = L^0_n(h) = P_n(2/k^2)K(k) + Q_n(2/k^2)E(k)$, in view of Proposition 2.4. For $r \ge 1$ we find that $I^0_{n,2r+1}(h) = P_{2(n+r)}(k)K(k) + Q_{2(n+r)}(k)E(k)$ for certain even polynomials $P_{2(n+r)}$ and $Q_{2(n+r)}$ of degree $2(n + r)$. In view of Theorem 2.1 we conclude that $P_{2(n+r)}(k)K(k) + Q_{2(n+r)}(k)E(k)$ has at most $4(n + r) + 2$ zeros in $(-1, 1)$. Since this expression is even, we conclude that it has at most $2(n + r) + 1$ positive zeros, and the result follows.

Proof of Theorem A
In the rotary region we can write $M_+(h) = Z_s(h) + \sqrt{h}\,\big(P_{n+r}(h)K(k) + Q_{n+r}(h)E(k)\big)$ for some polynomials $Z_s$, $P_{n+r}$ and $Q_{n+r}$ of degrees $s$ and $n + r$. Here $s$ and $r$ are the largest natural numbers such that $2s \le m$ and $2r + 1 \le m$, respectively; that is, $s = E(m/2)$ and $r = E((m-1)/2)$. From Proposition 2.6 we obtain that the number of zeros of $M_+(h)$ in $(2, \infty)$ is bounded by $2(n + r) + 3s + 4 = 2n + 2m + E(m/2) + 2$. To prove the first statement of item (b) we note that for any $h \in (0, 2)$, any $i \in \mathbb{N}$ and any smooth function $f$ we have $\oint_{\gamma_h} f(x)\, y^{2i}\, dx = 0$. This is a direct consequence of the symmetry of the orbit $\gamma_h$ with respect to the $x$-axis and Green's Theorem. Indeed, $\oint_{\gamma_h} f(x)\, y^{2i}\, dx = -\iint_{\mathrm{Int}(\gamma_h)} 2i f(x)\, y^{2i-1}\, dx\, dy = 0$, since the integrand of the double integral is odd in $y$ and the domain of integration is symmetric with respect to the $x$-axis.
To finish the proof of the first statement we need to show that for any $j, i \in \mathbb{N}$ we have $\oint_{\gamma_h} \sin(jx)\, y^{2i+1}\, dx \equiv 0$ on $(0, 2)$. Again this is a consequence of Green's Theorem and the symmetry of the orbit $\gamma_h$, this time with respect to the $y$-axis. Lastly, setting $k^2 = h/2$ we obtain from Proposition 2.7 that $M_0(h) = P_{n+r}(h)K(k) + Q_{n+r}(h)E(k)$ for certain polynomials $P_{n+r}$ and $Q_{n+r}$ of degree $n + r$, where now $r$ is the largest integer satisfying $2r + 1 \le m$, that is, $r = E((m-1)/2)$. Then, using again Proposition 2.7, we obtain that the number of zeros of $M_0(h)$ in $(0, 2)$ is bounded by $2(n + r) + 1 = 2n + 2E((m-1)/2) + 1$. This ends the proof of Theorem A.
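The two symmetry arguments used above can be confirmed by direct quadrature along $\gamma_h$. The following sketch (our own illustration, for sample choices of $f$, $i$ and $j$) checks both vanishing statements:

```python
import numpy as np
from scipy.integrate import quad

h = 1.4
x_r = np.arccos(1.0 - h)
y = lambda x: np.sqrt(np.maximum(0.0, 2.0 * (h - 1.0 + np.cos(x))))

# oint f(x) y^{2i} dx: upper branch (+y) and lower branch (-y, reversed
# orientation) contribute with opposite signs, so the loop integral is 0.
f = lambda x: np.cos(3 * x)
upper, _ = quad(lambda x: f(x) * y(x) ** 2, -x_r, x_r)
lower, _ = quad(lambda x: f(x) * (-y(x)) ** 2, x_r, -x_r)  # reversed orientation
assert abs(upper + lower) < 1e-8

# oint sin(jx) y^{2i+1} dx = 2 int_{-x_r}^{x_r} sin(jx) y^{2i+1} dx = 0 (odd in x).
odd, _ = quad(lambda x: np.sin(2 * x) * y(x) ** 3, -x_r, x_r)
assert abs(odd) < 1e-8
```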

Proof of Theorem B
We start with some definitions and known results. Let $f_0, f_1, \dots, f_{n-1}$ be analytic functions on an open interval $L$ of $\mathbb{R}$. The ordered set $(f_0, f_1, \dots, f_{n-1})$ is a Chebyshev system (T-system) on $L$ if any nontrivial linear combination $a_0 f_0 + \cdots + a_{n-1} f_{n-1}$ has at most $n - 1$ isolated zeros on $L$.
The ordered set $(f_0, f_1, \dots, f_{n-1})$ is an extended complete Chebyshev system (ECT-system) on $L$ if, for all $k = 1, 2, \dots, n$, any nontrivial linear combination $a_0 f_0 + \cdots + a_{k-1} f_{k-1}$ has at most $k - 1$ isolated zeros on $L$ counting multiplicity.
It is clear that if $(f_0, f_1, \dots, f_{n-1})$ is an ECT-system on $L$, then it is also a CT-system on $L$. However, the reverse implication is not true in general. In order to show that a set of functions is a T-system, the notion of the Wronskian proves to be extremely useful.
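For illustration (our own example, not from the text): for the even Fourier basis $(1, \cos x, \cos 2x)$, the Wronskian test can be carried out symbolically. On $(0, \pi)$ the Wronskians reduce to $-\sin x$ and $-4\sin^3 x$, which do not vanish there, so by the classical criterion the family is an ECT-system on $(0, \pi)$:

```python
import math
import sympy as sp

x = sp.symbols('x')
fs = [sp.Integer(1), sp.cos(x), sp.cos(2 * x)]  # even Fourier basis of degree 2

# (f_0,...,f_{n-1}) is an ECT-system iff the leading Wronskians
# W[f_0], W[f_0,f_1], ... are nonvanishing on L; here L = (0, pi).
W2 = sp.wronskian(fs[:2], x)      # equals -sin(x)
W3 = sp.wronskian(fs, x)          # equals -4*sin(x)**3

for x0 in (0.3, 1.0, 2.5):
    assert float(W2.subs(x, x0)) != 0
    assert abs(float(W3.subs(x, x0)) + 4 * math.sin(x0) ** 3) < 1e-9
```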
For the sake of brevity we use the shorthand $\mathbf{x}_k = (x_0, x_1, \dots, x_{k-1})$. Recall that if the functions $f_i$ are linearly dependent, then so are the columns of the Wronskian matrix, and therefore $W[\mathbf{f}_k] = 0$. The reverse implication is not true in general. However, if the $f_i$ are analytic, then the vanishing of $W$ implies linear dependence (this is due to Peano, and there are other, more sophisticated criteria due to Bôcher, Wolsson, etc.). The next result is well known, cf. [13]. To study the limit cycles of equation (1) in the oscillatory region $R_0$ we will repeatedly use a result of Grau, Mañosas and Villadelprat [9], which we state in Theorem 3.4 below. It allows one to deduce Chebyshev properties of certain Abelian integrals from the Chebyshev properties of the corresponding integrands. We state this result for the particular case of even potential systems. To fix notation, let $V$ be an analytic even function defined in a neighborhood of the origin that has a nondegenerate local minimum at $0$ with $V(0) = 0$, and consider the associated Hamiltonian system given by $H(x, y) = y^2/2 + V(x)$. Then the origin of $\mathbb{R}^2$ is a critical point of center type and there exists a punctured neighborhood $P$ of the origin, the so-called period annulus, which is foliated by ovals $\gamma_h \subset \{H(x, y) = h\}$. Thus, the set of ovals inside the period annulus can be parametrized by the energy levels $h \in (0, h_0)$ for some $h_0 \in (0, \infty]$, and the projection of the period annulus on the $x$-axis is a symmetric interval $(-x_r, x_r)$ with $V(x_r) = h_0$. The following result plays a key role in our analysis.
The authors of [9] point out that if the condition $s > n - 2$ does not hold, there is a procedure to obtain a new expression for the same set of Abelian integrals for which the corresponding $s$ is large enough to verify the inequality. We review this result here (stated for potential systems) for the convenience of the reader.
Lemma 3.5 (Lemma 4.1 in [9]). Let $\gamma_h$ be an oval inside the level curve $\{y^2/2 + V(x) = h\}$, and consider a function $F$ such that $F/V'$ is analytic at $x = 0$. Then, for any $k \in \mathbb{N}$, $\int_{\gamma_h} F(x)\, y^{k-2}\, dx = \int_{\gamma_h} G(x)\, y^{k}\, dx$, where $G = \frac{1}{k}\left(\frac{F}{V'}\right)'$.
We now apply this theory to the Hamiltonian systems (2) corresponding to pendulum-like equations of type (1), with total energy given by (3). The following is a useful auxiliary result.
Lemma 3.6. For any $j \in \mathbb{N}$, consider the functions $I_j : (0, 2) \to \mathbb{R}$ defined by $I_j(h) = \int_{\gamma_h} \frac{\cos^j(x)}{\sin^{2p}(x)}\, y^{2v-1}\, dx$, where $p, v \in \mathbb{N}$ and $\gamma_h \subset \{y^2/2 + 1 - \cos(x) = h\}$. If $q < v + 1$ then the family $(I_0, I_1, \dots, I_q)$ is an ECT-system on $(0, 2)$. Consequently, if $q < v + 1$ then for any polynomial $P$ of degree $q$ the Abelian integral $\int_{\gamma_h} \frac{P(\cos(x))}{\sin^{2p}(x)}\, y^{2v-1}\, dx$ has at most $q$ isolated zeros in $(0, 2)$, counting multiplicity, and it is identically zero if and only if $P$ is identically zero.
Due to the particular structure of the Hamiltonian under consideration, the function $G$ in Lemma 3.5 has a rather simple form, which reveals an interesting structural property of the Abelian integrals (13). To see this, let $A$ be the set of real analytic functions on $(0, \pi)$ and consider the linear operator $L : A \to A$ defined as in (14), obtained from Lemma 3.5 for the pendulum potential $V(x) = 1 - \cos(x)$, with $V'(x) = \sin(x)$. From now on, $L^j$ denotes the $j$-fold composition of the operator and $L^0 = \mathrm{id}$.
Proof. The proof of statement (A) is a straightforward induction on $j$ using the linearity of the operator $L$, which we omit for the sake of brevity. The proof of statement (B) follows by induction as well. We start with the base case $j = m$ and the corresponding claim, where the real number $K(m)$ is defined as above. To prove the claim, notice that, in view of the identity $\cos(mx) = \cos((m - 1)x)\cos(x) - \sin((m - 1)x)\sin(x)$ and the linearity of the operator $L$, statement (A) yields the relations (17) and (18). We are now ready to prove the claim by induction on $m$. The base case $m = 0$ holds trivially with $K(0) = 1$. Assuming that the statement is true for $m$, we prove the inductive step using the identities (17) and (18) derived above, which proves the claim with $K(m + 1) = -(2m + 1)K(m)$, as defined in (16). Let us proceed with the proof of statement (B). Assuming that this statement holds for $j \in \mathbb{N}$, the inductive step follows immediately from the definition of $L$, which in view of (15) concludes the proof.
Proof of Theorem B. First we prove statement (a). If $r = -1$ then $s_1 = s_2 = 2p$ for some natural number $p$. The result then follows directly from the fact that in this situation system (4) is reversible with respect to the $x$-axis and therefore has a center at the origin for all $\varepsilon$.
(b) Now $r \ge 0$. The above argument shows that all Abelian integrals $\oint_{\gamma_h} Q_{n,s}(x)\, y^s\, dx$ with $s$ even are identically zero. Moreover, since $\oint_{\gamma_h} \sin(jx)\, y^s\, dx$ is identically zero on $(0, 2)$ for all $j \ge 0$ by Green's Theorem, we obtain $M_0(h) = \oint_{\gamma_h} \sum_{s=0}^{r} Q^e_{n,\ell+2s}(x)\, y^{\ell+2s}\, dx$.
Now we prove item (b$_1$). In view of Lemma 3.5, and using the operator $L$ defined in (14), we may write, up to nonzero multiplicative constants, $M_0(h) = \oint_{\gamma_h} \sum_{s=0}^{r} L^{n+r-s}\big(Q^e_{n,\ell+2s}\big)(x)\, y^{\ell+2n+2r}\, dx$.

From Lemma 3.7 (B) it follows that
$L^{n+r-s}\big(Q^e_{n,\ell+2s}\big)(x) = \frac{R_s(\cos(x))}{\sin^{2(n+r-s)}(x)}$, for certain polynomials $R_s$ of degree $n + r - s$. Thus, up to a nonzero constant, we obtain $M_0(h) = \oint_{\gamma_h} \frac{R(\cos(x))}{\sin^{2(n+r)}(x)}\, y^{\ell+2n+2r}\, dx$, where $R(u) = \sum_{s=0}^{r} R_s(u)(1 - u^2)^{s}$ is a polynomial of degree $n + 2r$. Since $2r < \ell + 3$, the first part of statement (b$_1$) follows from Lemma 3.6.
To prove the second part we need to show that, for $r \le 2$, using the above procedure we can obtain any prescribed polynomial $R(u)$ of degree $n + 2r$. For $r = 0$ this follows because $R(u) = R_0(u)$, which is defined by $L^{n}\big(Q^e_{n,0}\big)(x) = \frac{R_0(\cos(x))}{\sin^{2n}(x)}$. Thus, for $i = 0, \dots, n$, choosing $Q_{n,0}(x) = \cos(ix)$ we obtain $R = P_{n,i}$, a polynomial of degree exactly $n - i$. Clearly the set $\{P_{n,0}, P_{n,1}, \dots, P_{n,n}\}$ is a basis of the polynomials of degree $n$. This shows that there exists a linear combination of perturbations $Q_{n,0}$ for which the corresponding Melnikov function has exactly $n$ zeros, which proves the case $r = 0$. When $r = 1$ we have that $R(u) = R_0(u) + R_1(u)(1 - u^2)$, where $R_0(u)$ and $R_1(u)$ are defined by $L^{n+1}\big(Q^e_{n,0}\big)(x) = \frac{R_0(\cos(x))}{\sin^{2(n+1)}(x)}$ and $L^{n}\big(Q^e_{n,1}\big)(x) = \frac{R_1(\cos(x))}{\sin^{2n}(x)}$, respectively. Choosing $Q_{n,0}(x) = \frac{\cos((n-1)x)}{(2n-1)K(n-1)}$ and $Q_{n,1}(x) = 2n\cos(nx)$ we get a polynomial $R$ of degree $0$. On the other hand, choosing $Q_{n,1}(x) = 0$ and $Q_{n,0}(x) = \cos(ix)$ for $i = 0, \dots, n$, we obtain that $R$ is a polynomial of degree $i + 1$. Lastly, choosing $Q_{n,1}(x) = 1$ and $Q_{n,0}(x) = \cos(ix)$ we obtain that $R$ is a polynomial of degree $n + 2$. These choices give a basis of the polynomials of degree $n + 2$. The same type of arguments and computations shows the result for $r = 2$, but we omit the details for the sake of brevity. Item (b$_2$) follows from the fact that in this case the Melnikov integral is identically zero if and only if $Q_{n,s_1}$ depends only on $\sin(x)$, i.e. $Q^e_{n,s_1} = 0$, in which case the system is reversible and has a center at the origin for all $\varepsilon$.
Proof of Proposition 1.1. The proof is conducted along the lines of the proof of Theorem B and involves a large amount of computation. For the sake of brevity we only give details in the case $r = 2$. We need to study the number of zeros of linear combinations of $I_{0,2s-1}(h) = \oint_{\gamma_h} y^{2s-1}\, dx$. By Lemma 3.5, using the notation of Theorem 3.4 and the operator $L$ given in (14), the three integrals $I_{0,1}$, $I_{0,3}$ and $I_{0,5}$ can all be expressed with integrands involving the term $y^5$. The resulting functions are even and well defined in $(0, \pi)$. Clearly, none of them vanishes on $(0, \pi)$, and it holds that $n < v + 2$. Therefore, we can apply Theorem 3.4, proving that the functions $I_{0,1}$, $I_{0,3}$ and $I_{0,5}$ form an ECT-system on $(0, 2)$.

Proof of Theorem C
To study the limit cycles in the rotary regions $R_\pm$ we resort to a result of Gasull, Li and Torregrosa published in [7]. In that paper the authors introduce a family of analytic functions $J_{i,\alpha}$, see (19), where $g$ is a continuous function, $a, b \in \mathbb{R}$ and $\alpha \in \mathbb{R} \setminus \mathbb{Z}^-$. These functions are defined on the open interval $W$ where $1 - yg(x) > 0$ for all $x \in [a, b]$. They prove:
Theorem 4.1 (Theorem A in [7]). For any $n \in \mathbb{N}$ and any $\alpha \in \mathbb{R} \setminus \mathbb{Z}^-$, the ordered set of functions $(J_{0,\alpha}, \dots, J_{n,\alpha})$, as defined in (19), is an ECT-system on $W$.
Proof of Theorem C. We prove the result for $M_+(h)$; the proof for $M_-$ follows in the same way. From Theorem A we have that $M_+(h)$ is a linear combination of the integrals $\oint_{\gamma_h^+} Q^e_{n,s}(x)\, y^{2s}\, dx$ and $\oint_{\gamma_h^+} Q^e_{n}(x)\, y^{2p+1}\, dx$, and direct computations imply that the even powers contribute a polynomial $Z_r(h)$ of degree $r$, while the odd power contributes a combination of the $I_{i,p}(h)$ with constants $a_i, b_i \in \mathbb{R}$. So $M_+(h)$ belongs to the linear space generated by $1, h, \dots, h^r, I_{0,p}(h), \dots, I_{n,p}(h)$, where $I_{i,p}(h) = \int_0^\pi \cos^i(x)\,(h - 1 + \cos(x))^{(2p+1)/2}\, dx$ are given in (8). Therefore, it suffices to show that the family $(1, h, \dots, h^r, I_{0,p}(h), \dots, I_{n,p}(h))$ is an ECT-system. To this end, let $k \le n$ and consider $\varphi(h) = \sum_{i=0}^{r} a_i h^i + \sum_{i=0}^{k} c_i I_{i,p}(h)$. Then $\varphi^{(r+1)}(h) = \sum_{i=0}^{k} d_i I_{i,p-r-1}(h)$, and Proposition 4.2 implies that either $\varphi^{(r+1)}(h)$ is identically zero or it has at most $k$ zeros counting multiplicity. From Rolle's Theorem we obtain that either $\varphi(h)$ is identically zero or it has at most $k + r + 1$ zeros counting multiplicity.
The proofs of the cases Q n (x) ≡ 0 or Q n,0 (x) ≡ Q n,1 (x) ≡ · · · Q n,r (x) ≡ 0 are much easier and follow by using the same arguments.

Simultaneous bifurcation of limit cycles
Our aim here is to study the maximum number of limit cycles which may bifurcate simultaneously, on the entire cylinder, from the periodic orbits of the integrable pendulum, i.e. in $R_0 \cup R_\pm$. Notice that this region corresponds to all $h \in (0, \infty) \setminus \{2\}$. To this end we introduce the following notation: given a family of systems of the form (4), we say that it admits the configuration of limit cycles $[c_-; c_0; c_+]$, where $c_-$, $c_0$ and $c_+$ are nonnegative integers, if there exist values of the parameters of the system such that the three first-order Melnikov integrals associated to it, $M_-(h)$, $M_0(h)$ and $M_+(h)$, have $c_-$, $c_0$ and $c_+$ simple zeros, respectively, all of them lying in the corresponding intervals of definition of the Melnikov functions, that is, $(2, \infty)$, $(0, 2)$ and $(2, \infty)$, respectively. With this notation, Theorem A implies that the configuration with the largest number of limit cycles, in case it is realizable, would be $[2n + 2m + E(m/2) + 2;\ 2n + 2E((m-1)/2) + 1;\ 2n + 2m + E(m/2) + 2]$.
Even if each value of a configuration is individually optimal, deciding when all the maximal values are attained simultaneously is a very intricate problem. In the results of [17,20] for some subcases of system (4) the maximal values are not attained simultaneously, but this may happen for similar systems, see for instance [5]. In this section we give some examples illustrating that for other simple cases of system (4) the global optimal values are not attained simultaneously in the three regions. We believe that this general question is of interest and deserves further work.
Our first example is the subfamily of systems of the form (4) given by (20) $\dot x = y$, $\dot y = -\sin(x) + \varepsilon(a_0 + a_1\cos(x))y$, with $(a_0, a_1) \in \mathbb{R}^2$. From Theorems B and C we get that the configuration with the largest number of limit cycles is $[1; 1; 1]$. Let us prove that the only two possible configurations of limit cycles of system (20) are $[1; 0; 1]$ and $[0; 1; 0]$.
It is clear that $c_+ = c_-$ because $M_+(h) = -M_-(h)$. So we only need to prove that $M_+$ and $M_0$ cannot simultaneously have a zero in their respective intervals of definition. But this is a straightforward consequence of the fact that the quotient $Q$ of the two Abelian integrals involved is globally decreasing.
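The monotonicity just invoked can be probed numerically. In the sketch below (our own normalization and naming) we take the quotient of the two basic Abelian integrals $\oint \cos(x)\,y\,dx$ and $\oint y\,dx$ and check that it decreases through sample energies in both the oscillatory and the rotary regions:

```python
import numpy as np
from scipy.integrate import quad

def pair(h):
    """Return (oint cos(x) y dx, oint y dx) along gamma_h (gamma_h^+ for h > 2),
    up to a common positive constant coming from the symmetries."""
    y = lambda x: np.sqrt(np.maximum(0.0, 2.0 * (h - 1.0 + np.cos(x))))
    x_r = np.arccos(1.0 - h) if h < 2 else np.pi
    J, _ = quad(lambda x: np.cos(x) * y(x), 0.0, x_r)
    I, _ = quad(y, 0.0, x_r)
    return J, I

# The quotient Q(h) = (oint cos(x) y dx) / (oint y dx) decreases in h, which
# forbids simultaneous zeros of M_0 and M_+ for system (20).
qs = []
for h in (0.5, 1.0, 1.5, 2.5, 3.5):
    J, I = pair(h)
    qs.append(J / I)
assert all(a > b for a, b in zip(qs, qs[1:]))
```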
Notice that the above approach works for two integrals due to the nice analogy between non-vanishing Wronskians and the monotonicity of quotients. The generalization to an arbitrary number of integrals, however, is far from obvious.
From the above inequalities it is not difficult to prove that when r = 0 the realizable configurations with a maximal number of limit cycles for system (21)