On the cyclicity of Kolmogorov polycycles

In this paper we study planar polynomial Kolmogorov differential systems \[ X_\mu:\quad \dot x = xf(x,y;\mu),\qquad \dot y = yg(x,y;\mu), \] with the parameter $\mu$ varying in an open subset $\Lambda\subset\R^N$. Compactifying $X_\mu$ to the Poincar\'e disc, the boundary of the first quadrant is an invariant triangle $\Gamma$, which we assume to be a hyperbolic polycycle with exactly three saddle points at its vertices for all $\mu\in\Lambda$. We are interested in the cyclicity of $\Gamma$ inside the family $\{X_\mu\}_{\mu\in\Lambda}$, i.e., the number of limit cycles that bifurcate from $\Gamma$ as we perturb $\mu$. In our main result we define three functions that play the same role for the cyclicity of the polycycle as the first three Lyapunov quantities for the cyclicity of a focus. As an application we study two cubic Kolmogorov families, with $N=3$ and $N=5$, and in both cases we are able to determine the cyclicity of the polycycle for all $\mu\in\Lambda$, including those parameters for which the return map along $\Gamma$ is the identity.


Introduction and main results
The present paper is motivated by the results obtained by Gasull, Mañosa and Mañosas [8] on the stability of an unbounded polycycle Γ in Kolmogorov polynomial differential systems ẋ = xf(x, y), ẏ = yg(x, y).
These systems are widely used in ecology to describe the interaction between two populations, see [19] for instance. That being said, the stability of the polycycle is not the main issue to which this paper is addressed. Indeed, assuming that the coefficients of the polynomials f and g depend analytically on a parameter µ, we are interested in the cyclicity of the polycycle (see Definition 1.2 below), which roughly speaking is the number of limit cycles that can bifurcate from Γ as we perturb µ. In our main result (Theorem A) we define three functions, d_1(µ), d_2(µ) and d_3(µ), that play the same role for the cyclicity of the polycycle as the first three Lyapunov quantities for the cyclicity of a focus. Recall that the displacement map can be analytically extended to a focus and that the Lyapunov quantities are the coefficients of its Taylor series. By contrast, the displacement map has no smooth extension to a polycycle. At best one can hope that it has some asymptotic expansion. This is indeed the case for the polycycle that we study in the present paper and, in order to obtain it, we strongly rely on our previous results [14,15,16] about the asymptotic expansion of the Dulac map of an unfolding of hyperbolic saddles. The principal part of the asymptotic expansion of the displacement map is given in a monomial scale containing a deformation of the logarithm, the so-called Écalle-Roussarie compensator, and the remainder is uniformly flat with respect to the parameters. The functions d_i(µ) in Theorem A are essentially the coefficients of the first three monomials in the principal part, which explains their relation with the cyclicity. For other results regarding the cyclicity of polycycles and more general limit periodic sets the reader is referred to [5,6,11,20] and references therein.
Most of the work on planar polynomial differential systems, including this paper, is related to the questions surrounding Hilbert's 16th problem (see for instance [10,22,23] and references therein) and its various weakened versions. In this setting it is worth mentioning that, using a compactness argument, Roussarie [21] showed that to prove the existential part of Hilbert's 16th problem in the family P_n of all polynomial vector fields of degree n it suffices to show that each limit periodic set in P_n has finite cyclicity.
Definition 1.1. Let X be a vector field on R² (or S²). A graphic Γ for X is a compact, non-empty invariant subset which is a continuous image of S¹ and consists of a finite number of isolated singular points {p_1, ..., p_m, p_{m+1} = p_1} (not necessarily distinct) and compatibly oriented separatrices {s_1, ..., s_m} connecting them (i.e., such that the α-limit set of s_j is p_j and the ω-limit set of s_j is p_{j+1}). A graphic is said to be hyperbolic if all its singular points are hyperbolic saddles. A polycycle is a graphic with a return map defined on one of its sides.
The polycycle that we aim to study is unbounded. In order to investigate the behaviour of the trajectories of a polynomial vector field Y near infinity we can consider its Poincaré compactification p(Y), see [2, §5] for details, which is an analytically equivalent vector field defined on the sphere S². The points at infinity of R² are in bijective correspondence with the points of the equator of S², which we denote by S¹_∞. Furthermore, the trajectories of p(Y) in S² are symmetric with respect to the origin, so it suffices to draw its flow in the closed northern hemisphere only, the so-called Poincaré disc.

Definition 1.2. Let {X_µ}_{µ∈Λ} be a family of vector fields on S² and suppose that Γ is a polycycle for X_µ0. We say that Γ has finite cyclicity in the family {X_µ}_{µ∈Λ} if there exist κ ∈ N, ε > 0 and δ > 0 such that any X_µ with ‖µ − µ_0‖ < δ has at most κ limit cycles γ_i with dist_H(Γ, γ_i) < ε. The minimum of such κ as ε and δ go to zero is called the cyclicity of Γ in {X_µ}_{µ∈Λ} and denoted by Cycl((Γ, X_µ0), X_µ).
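Definition 1.2 measures the proximity of limit cycles to Γ through the Hausdorff distance dist_H. As a small illustration of this notion (a generic numerical sketch with helper names of our own, not taken from the paper), one can approximate dist_H between two closed curves by sampling them:

```python
import math

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two finite point sets A, B
    in the plane (discretisations of the curves Gamma and gamma_i)."""
    def d(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    def one_sided(P, Q):
        # sup over P of the distance to the nearest point of Q
        return max(min(d(p, q) for q in Q) for p in P)
    return max(one_sided(A, B), one_sided(B, A))

# Two concentric circles of radii 1 and 1.1 sampled at the same angles:
# their Hausdorff distance is the radial gap 0.1.
angles = [k * math.pi / 50 for k in range(100)]
A = [(math.cos(t), math.sin(t)) for t in angles]
B = [(1.1 * math.cos(t), 1.1 * math.sin(t)) for t in angles]
```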
In this paper we consider the family of vector fields {X_µ}_{µ∈Λ} given by X_µ := f(x, y; µ) x∂_x + g(x, y; µ) y∂_y (1) where Λ is an open subset of R^N and f and g are polynomials in x and y of degree n ∈ N with coefficients depending analytically on µ. The standing hypotheses on the family {X_µ}_{µ∈Λ} are the following: H1: f(z, 0; µ) > 0, g(0, z; µ) < 0 and (f_n − g_n)(1, z; µ) < 0 for all z > 0 and µ ∈ Λ.
Here, and in what follows, f_n and g_n denote, respectively, the homogeneous part of degree n of f and g. Conditions H1 and H2 guarantee that, after compactifying the polynomial vector field X_µ to the Poincaré disc, the boundary of the first quadrant is a polycycle with three hyperbolic saddles, see Figure 1. From now on we shall denote this polycycle by Γ, which, we remark, is a compact subset of the Poincaré disc. The hyperbolicity ratios of the saddles at its vertices are precisely the ones given in H2. We also define: We point out that all these functions depend on the parameter µ. This dependence is omitted for the sake of shortness when there is no risk of confusion.
We can now state our main result, which concerns the cyclicity of the polycycle Γ inside the polynomial family {X_µ}_{µ∈Λ}. More formally, we should refer to the compactified family {p(X_µ)}_{µ∈Λ} of vector fields on S², but for simplicity of exposition we commit an abuse of language by identifying both families. It is clear that the number of limit cycles of p(X_µ) and X_µ is the same. In the statement R(·; µ) stands for the return map of the vector field X_µ around the polycycle Γ (see Figure 1) and we use the notion of functional independence given in Definition 2.8.
Theorem A. Let us consider the family of Kolmogorov polynomial vector fields {X_µ}_{µ∈Λ} given in (1) and verifying the assumptions H1 and H2. Then, for any µ_0 ∈ Λ, the following assertions hold with regard to the cyclicity of the polycycle Γ inside the family: (1) does not vanish at µ_0 then Cycl((Γ, X_µ0), X_µ) ≤ 1.
(f) If d_1, d_2 and d_3 vanish and are independent at µ_0 and R(·; µ_0) is not the identity. Let us make some remarks with regard to the regularity of the functions d_1, d_2 and d_3 defined in the statement. On account of the hypotheses H1 and H2 it is evident that d_1 is analytic on the whole parameter space Λ. On the other hand, d_2 is defined in terms of the functions µ → L_ij(1), which in turn are given by some (apparently) improper integrals. By applying the Weierstrass Division Theorem one can easily show that each L_ij(1) is an analytic strictly positive function, so that d_2 is also analytic on Λ. Finally, d_3 is given by means of a sort of incomplete Mellin transform (which is defined in Proposition 2.5) of the functions M_1 and M_3 in (2). One can show that the hypotheses H1 and H2 imply that each M_i(u; µ) is analytic on (−ε, +∞) × Λ for some ε > 0. Taking this into account, by applying (d) in Proposition 2.5 it follows that d_3 is a meromorphic function on Λ. Also with regard to the statement of Theorem A, the assertions (e) and (f) hold under the assumptions λ_1(µ_0) < 1, λ_2(µ_0) > 1 and λ_3(µ_0) > 1. However one can always reduce to this case, provided that λ_i(µ_0) ≠ 1 for i = 1, 2, 3, by means of a rescaling of time and a projective change of coordinates that conveniently permute the three singular points of the polycycle.
The paper [8] constitutes an important previous contribution to the study of Kolmogorov polycycles that should be mentioned. Indeed, following our notations and definitions, the authors prove (see [8, Theorem 1]) that if d_1(µ_0) = 0 then the return map of X_µ0 around the polycycle Γ is of the form given in (b) of Theorem 2.6, and they also provide the explicit expression of the coefficient ∆. This coefficient is given as the limit of a sum of three improper integrals which, computed separately, diverge. An easy manipulation of the integrals shows that these divergences cancel each other, yielding the expression of d_2 given in Theorem A. It is important to remark that the expansion in (3) cannot be used to obtain an upper bound for Cycl((Γ, X_µ0), X_µ) because the remainder is not uniform with respect to the parameters. It is possible, however, to use it to obtain lower bounds. In this direction the authors prove in [8, Corollary 5] that if d_1 vanishes and is independent at µ_0 then the cyclicity is at least one.

The paper is organised in the following way. Section 2 is entirely devoted to proving Theorem A and for that purpose we rely on our previous results about the asymptotic expansion of the Dulac map of a hyperbolic saddle obtained in [14,15,16]. For this reason, before starting the proof of Theorem A we first state these results and introduce the necessary definitions. The asymptotic expansion of the displacement map near the polycycle is given in Theorem 2.6 and constitutes the fundamental tool in order to prove Theorem A.
As a by-product of this expansion we obtain a method to study the stability of the polycycle, see Remark 2.7. Section 3 is devoted to the applications. The first one is Theorem 3.1, where we consider a cubic Kolmogorov system depending on three parameters that was previously studied in [8]. The authors of that paper show that there exist parameters for which the cyclicity of the polycycle is at least 1. In the present paper we obtain the exact cyclicity for all the parameters in the family (which can be 0 or 1), including the case in which the return map along the polycycle is the identity. We also show that there exists exactly one singularity in the first quadrant, which can be a focus or a center, and we compute its cyclicity. Finally we prove that a simultaneous bifurcation of limit cycles from the polycycle and that singularity is not possible. We give our second application in Theorem 3.2, where we consider a cubic Kolmogorov system depending on five parameters. In this case we also provide the exact cyclicity of Γ for all the parameters in the family, which again can be 0 or 1.

Proof of Theorem A
In order to tackle the proof of Theorem A we will appeal to some previous results from [14,15,16] about the asymptotic expansion of the Dulac map. For the reader's convenience we gather these results in Proposition 2.4. To this end it is first necessary to introduce some notation and definitions.
Thus, for all ν ∈ Ŵ, the origin is a hyperbolic saddle of X_ν with the separatrices lying on the axes. We point out that here the hyperbolicity ratio of the saddle is an independent parameter, although in the proof of Theorem A we will have λ = λ(ν). The reason for this is that the hyperbolicity ratio turns out to be the ruling parameter in our results and, besides, having it uncoupled from the rest of the parameters simplifies the notation in the statements. Moreover, for i = 1, 2, we consider a C^ω transverse section Σ_i, parametrised by σ_i(·; ν), such that σ_1(0, ν) ∈ {(0, x_2); x_2 > 0} and σ_2(0, ν) ∈ {(x_1, 0); x_1 > 0} for all ν ∈ Ŵ. We denote the Dulac map of X_ν from Σ_1 to Σ_2 by D(·; ν), see Figure 2. The asymptotic expansion of D(s; ν) at s = 0 consists of a remainder and a principal part. The principal part is given in a monomial scale that contains a deformation of the logarithm, the so-called Écalle-Roussarie compensator, whereas the remainder has good flatness properties with respect to the parameters. We next give precise definitions of these key notions.
Definition 2.1. The function defined for s > 0 and α ∈ R by means of ω(s; α) := (s^{−α} − 1)/α if α ≠ 0, and ω(s; 0) := −log s, is called the Écalle-Roussarie compensator. More formally, the definition of C^∞_{s>0}(U) must be thought of in terms of germs with respect to relative neighbourhoods of {0} × U in (0, +∞) × U. In doing so C^∞_{s>0}(U) becomes a ring. We can now introduce the notion of flatness that we shall use in the sequel.
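With the formula above, the compensator satisfies s^{−α} = 1 + αω(s; α) by construction and tends to −log s as α → 0, which is the precise sense in which it deforms the logarithm. A quick numerical check (a sketch; the function name is ours):

```python
import math

def omega(s, alpha):
    """Ecalle-Roussarie compensator, as reconstructed from the identity
    s**(-alpha) = 1 + alpha * omega(s, alpha); equals -log(s) at alpha = 0."""
    if alpha == 0.0:
        return -math.log(s)
    return (s ** (-alpha) - 1.0) / alpha

s = 0.3
# continuity in alpha: omega(s; alpha) -> -log s as alpha -> 0
approx = omega(s, 1e-8)
# defining identity: s**(-alpha) = 1 + alpha * omega(s; alpha)
lhs = s ** (-0.7)
rhs = 1 + 0.7 * omega(s, 0.7)
```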
Given L ∈ R and a compact subset K of the parameter domain, the flatness condition reads: for each ν_0 ∈ K there exist a neighbourhood V of ν_0 and C, s_0 > 0 such that |f(s; ν)| ≤ C s^L for all s ∈ (0, s_0) and ν ∈ V.
Apart from the remainder and the monomial order, the most important ingredient for our purposes is the explicit expression of the coefficients in the asymptotic expansion. In order to give them we introduce some additional notation, where for the sake of shortness the dependence on ν̂ = (λ, ν) is omitted. We define the functions: On the other hand, for shortness as well, we use the compact notation σ_ijk for the kth derivative at s = 0 of the jth component of σ_i(s; ν), i.e., σ_ijk(ν) := ∂_s^k σ_ij(0; ν). Taking this notation into account we also introduce the following real values, where once again we omit the dependence on ν: Here M̂_i stands for a sort of incomplete Mellin transform of M_i that will be defined in Proposition 2.5 below. We can now state the following result, which gathers Theorem A and Theorem 4.1 in [16] and will constitute the key tool in order to prove the main result of the present paper.
The flatness of the remainder can range in a certain interval depending on λ_0. The left endpoint of this interval is only given for completeness, to guarantee that all the monomials in the principal part are relevant (i.e., that they cannot be included in the remainder). The important information about the flatness is given by the right endpoint. A key tool in order to give a closed expression for the coefficients ∆_i is the use of a sort of incomplete Mellin transform, which is accurately defined in the next result. For a proof of this result the reader is referred to [16, Appendix B]. On account of this result, for each M_i(u; ν) in (5) we have that (α, u; ν) → M̂_i(α, u; ν) is a well defined meromorphic function with poles only at α ∈ Z≥0. Accordingly, see (6), M̂_1(1/λ, σ_120) and M̂_2(λ, σ_210) are the values (depending on ν) that we obtain by taking M̂_1(α, u; ν) with α = 1/λ and u = σ_120(ν), and by taking M̂_2(α, u; ν) with α = λ and u = σ_210(ν), respectively.
At this point we get back to the setting treated in the present paper and from now on we recover the original notation for the parameters of the family under consideration, see (1). In order to study the Dulac maps of the hyperbolic saddles at the vertices of the polycycle Γ we take three local transverse sections Σ_1, Σ_2 and Σ_3 parametrised, respectively, by s → (1, s), s → (1/s, 1/s) and s → (s, 1) with s > 0. We define D_1(s; µ) to be the Dulac map of X_µ from Σ_1 to Σ_2, D_2(s; µ) to be the Dulac map of X_µ from Σ_2 to Σ_3 and, finally, D_3(s; µ) to be the Dulac map of −X_µ from Σ_1 to Σ_3, see Figure 3. It is then clear that the limit cycles of X_µ near Γ are in one-to-one correspondence with the isolated positive zeros of the displacement map D(s; µ) := D_2(D_1(s; µ); µ) − D_3(s; µ) near s = 0. The proof of Theorem A strongly relies on our next result, where we obtain the asymptotic expansion of D(s; µ) at s = 0 and compute its coefficients. In its statement d_i(µ), for i = 1, 2, 3, are the functions defined in Theorem A and R{µ}_µ0 stands for the local ring of convergent power series at µ_0.
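For orientation, in the model case of a linear saddle ẋ = x, ẏ = −λy the Dulac map between the sections {y = 1} and {x = 1} is exactly D(s) = s^λ, with λ the hyperbolicity ratio; the maps D_i above are unfoldings of this situation. A numerical sketch (RK4 integration; the setup and names are ours, not the paper's):

```python
import math

def dulac_linear_saddle(s, lam, dt=1e-4):
    """Flow the linear saddle x' = x, y' = -lam*y from (s, 1) on
    Sigma1 = {y = 1} until it crosses Sigma2 = {x = 1}; return the
    y-coordinate there. For this model D(s) = s**lam exactly."""
    x, y = s, 1.0
    def f(x, y):
        return x, -lam * y
    while x < 1.0:
        # classical fourth-order Runge-Kutta step
        k1 = f(x, y)
        k2 = f(x + dt/2*k1[0], y + dt/2*k1[1])
        k3 = f(x + dt/2*k2[0], y + dt/2*k2[1])
        k4 = f(x + dt*k3[0], y + dt*k3[1])
        x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y
```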
Theorem 2.6. Let us fix any µ_0 ∈ Λ and set λ_i^0 := λ_i(µ_0) for i = 1, 2, 3; in the expansions below a_1 and a_2 are analytic and strictly positive functions on Λ.

So far we have proved the first assertion in (b).
To show the second one, besides λ_1^0 λ_2^0 λ_3^0 = 1, we assume λ_1^0 > 1, λ_2^0 > 1 and λ_3^0 < 1. On account of this we can apply point (3) in Proposition 2.4 to conclude that and Here the first order coefficients ∆_10, ∆_20 and ∆_30 are the ones already defined in (7), (8) and (9), respectively. With regard to the second order coefficients, only the ones of D_1 and D_3 are relevant for our purposes, and they are given by and respectively. In each case, on account of (2), this follows easily from the formula ∆_1 = λ∆_0 S_1 given in Proposition 2.4, taking S_1 in (6) particularised to σ_1(s) = (s, 1).
From (11) and (12), by applying [15, Lemma A.2] we can assert that the corresponding expansions hold in the first equality, for any ℓ_11 ∈ (λ_1^0 + 1, λ_1^0 + min(λ_1^0, 2)) in the second one and for any ℓ_12 ∈ (λ_1^0 + 1, min(2λ_1^0, λ_1^0 + 2, λ_1^0 λ_2^0)) in the third one. Furthermore, in the second equality we use that, for any η = η(µ), for any ε > 0, which in turn follows noting that, where we use again [15, Lemma A.2] taking λ_1^0 > 1 and λ_2^0 > 1 into account. Hence, from (13), plugging in s^{−α} = 1 + αω(s; α) as before, we get where we define The application of the formula given in (15) Thus one can easily verify that the above expression yields In order to prove this, for the sake of shortness in the next computation we follow the convention that κ stands for an analytic function at µ_0 and κ̄ stands for an analytic strictly positive function at µ_0. Some easy computations following this convention yield where in the third and fifth equalities we use (14). This shows the validity of the claim and completes the proof of the result.
Remark 2.7. There are two important observations to be made about Theorem 2.6: between ideals over the local ring R{µ}_µ are satisfied. As a matter of fact, in the proof we show a stronger property, namely that the following holds: where all the entries in the matrix are analytic functions on Λ and each κ_i is strictly positive.
Of course this is relevant because we have an explicit expression for these functions by Theorem A. In this regard let us note that the first assertion is well known, since d_1(µ_0) < 0 is equivalent to requiring that λ_1^0 λ_2^0 λ_3^0 > 1, while the second assertion was already proved by Gasull et al. in [8], see Theorem 1. On the contrary, the third assertion constitutes, to the best of our knowledge, a new result.
Clearly R = (r_ij) := BA is also a triangular matrix with coefficients in the local ring R{µ}_µ* and We claim that, since c_1, ..., c_n are independent at µ*, then r_kk(µ*) = 1 for all k = 1, 2, ..., n. The fact that this is true for k = 1 follows easily by continuity. Let us prove by contradiction that it is also true for k ≥ 2. So assume that r_kk(µ*) ≠ 1 for some k ∈ {2, ..., n}. Then the equality where each α_i := r_ki/(1 − r_kk) is an analytic function at µ*. This clearly contradicts the assumption that c_1, ..., c_n are independent at µ* (see Definition 2.8). Hence the claim is true and, consequently, det(R) = det(A) det(B) = 1 at µ = µ*. This shows, in particular, that A is an invertible matrix in the local ring R{µ}_µ* and so there exists a neighbourhood U of µ* such that a_kk(µ) ≠ 0 for all µ ∈ U and k = 1, 2, ..., n. On account of this, the fact that d_1, ..., d_n are independent at µ* follows easily noting that if we take any two points This completes the proof of the result.
Proof of Theorem A. Let us fix any µ_0 ∈ Λ and set λ_i^0 := λ_i(µ_0) for i = 1, 2, 3. Recall that the limit cycles of X_µ near Γ are in one-to-one correspondence with the isolated positive zeros of D(s; µ). If d_1(µ_0) is not zero then by applying (a) in Theorem 2.6 we have that, for any µ ≈ µ_0, where a_1 and a_2 are analytic and strictly positive functions on Λ. Since a_i(µ_0) ≠ 0 for i = 1, 2, this implies the existence of an open neighbourhood U of µ_0 and ε > 0 small enough such that D(s; µ) ≠ 0 for all µ ∈ U and s ∈ (0, ε) when λ_1^0 λ_2^0 λ_3^0 ≠ 1. Hence Cycl((Γ, X_µ0), X_µ) = 0 and the assertion in (a) is true.

Applications
We begin this section by revisiting in Theorem 3.1 a family of Kolmogorov differential systems that was first studied in [8], where the authors (following the notation in our statement) prove that if µ_0 = (a_0, p_0, q_0) verifies p_0 + q_0 = 0 and a_0 ≠ 0 then Cycl((Γ, X_µ0), X_µ) ≥ 1, cf. assertion (b).
(c) The return map of X_µ0 along Γ is the identity if, and only if, a_0 = p_0 + q_0 = 0. In this case Γ is the outer boundary of the period annulus of a center at (x_0, y_0), which foliates the first quadrant, and, moreover, Cycl((Γ, X_µ0), X_µ) = 1.
On the other hand, the vector field X_µ has a unique singularity Q_µ = (υ_1, υ_2) in the first quadrant, which is either a focus or a center, and has trace equal to τ(µ). Furthermore the following holds: Cycl((Q_µ0, X_µ0), X_µ) = 0, and a sufficient condition for τ(µ_0) ≠ 0 to hold is that p_0 + q_0 = 0 and a_0 ≠ 0.
Finally, a simultaneous bifurcation of limit cycles from Γ and Q_µ is not possible.
Proof. The assertions in (a) and (b) follow directly by applying Theorem A. Indeed, in this case, following the notation in (1), f(x, y) = 1 + x + x² + axy + py² and g(x, y) = −1 − y + qx² + axy − y², so that, taking this into account together with p < −1 and q > 1, one can easily check that the assumptions H1 and H2 are verified. As a matter of fact, the first assumption holds not only for z > 0 but for all z ∈ R, and this implies that the boundary of each quadrant is a monodromic polycycle for the compactified vector field. Hence, by the Poincaré-Bendixson theorem (see [22] for instance), there exists at least one singularity of X_µ inside each of the four quadrants. Since deg(f) = deg(g) = 2, by Bézout's theorem there is exactly one in each quadrant. From now on we denote the singularity of X_µ in the first quadrant by Q_µ. That being said, the hyperbolicity ratios of the saddles of Γ are λ_1 = 1/(q − 1), λ_2 = −(p + 1) and λ_3 = 1. Consequently the first assertion follows from (a) in Theorem A because The second assertion will follow by applying (b) and (c) in Theorem A. To show this we first recall that and this leads us to the computation of the following improper integrals: These expressions have to be computed assuming that p + q = 0, i.e., λ_1 λ_2 = 1. In doing so we obtain that and Therefore d_2(a, p, −p) = −aπ/2 is zero if, and only if, a = 0. Taking this into account, the combination of (b) and (c) in Theorem A shows that Cycl((Γ, X_µ0), X_µ) = 1 for any µ_0 = (a_0, p_0, −p_0) with a_0 ≠ 0, as desired. It is important to remark for the forthcoming analysis that by applying the Weierstrass Division Theorem (see for instance [9,12]) we can assert that for some analytic function h.
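Since H1 consists of sign conditions along the axes and on the top-degree homogeneous parts, it can at least be spot-checked numerically for the family of Theorem 3.1. The following sketch (a finite grid check at one admissible parameter, not a proof for all z > 0) uses the f and g displayed above:

```python
# Spot check of hypothesis H1 for the cubic family of Theorem 3.1:
# f(z,0) > 0, g(0,z) < 0 and (f2 - g2)(1,z) < 0 for z > 0,
# at one sample parameter with p < -1 and q > 1 (grid check only).
a, p, q = 0.5, -3.0, 3.0

def f(x, y):
    return 1 + x + x**2 + a*x*y + p*y**2

def g(x, y):
    return -1 - y + q*x**2 + a*x*y - y**2

def f2_minus_g2(x, y):
    # difference of the degree-2 homogeneous parts of f and g
    return (x**2 + a*x*y + p*y**2) - (q*x**2 + a*x*y - y**2)

grid = [0.01 * k for k in range(1, 5001)]        # z in (0, 50]
h1_holds = (all(f(z, 0) > 0 for z in grid)
            and all(g(0, z) < 0 for z in grid)
            and all(f2_minus_g2(1, z) < 0 for z in grid))
```

Here (f2 − g2)(1, z) = (1 − q) + (p + 1)z², so both coefficients are negative for p < −1 and q > 1, which is why the grid check succeeds.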
Next we proceed with the proof of (c). To this aim we fix any µ_0 = (a_0, p_0, q_0) and apply Theorem 2.6, which gives the asymptotic expansion of D(s; µ) at s = 0 for µ ≈ µ_0. This result, taking (21) and (22) into account, shows that if D(s; µ_0) ≡ 0 then a_0 = p_0 + q_0 = 0. In order to prove the converse, observe that if µ_0 = (0, p_0, −p_0) then the vector field X_µ0 reads One can easily check that Q_µ0, the only singularity of X_µ0 in the first quadrant, is a weak focus at the point (x_0, y_0). Furthermore, setting σ(x, y) = (y, x), it turns out that σ^∗X_µ0 = −X_µ0, so the vector field is reversible with respect to the straight line y = x. Hence Q_µ0 is a center and a straightforward application of the Poincaré-Bendixson theorem shows that its period annulus fills the first quadrant, which in particular implies that D(s; µ_0) ≡ 0.
Let us turn now to the proof of the assertions regarding the singularity of X_µ at Q_µ. The approach here is rather standard and the technical difficulty is that we do not have at our disposal a feasible expression for the coordinates of Q_µ. To overcome this problem we shall parametrise the family of vector fields more conveniently. For the reader's convenience we summarise the chain of reparametrisations that we shall perform: For the first one we simply introduce ε = p + q. In the second one we take the coordinates of the singular point Q_µ = (υ_1, υ_2) as new parameters, i.e., we isolate a and p from f(υ_1, υ_2; µ) = g(υ_1, υ_2; µ) = 0 to obtain In this respect we point out that υ_1 and υ_2 are strictly positive because Q_µ is inside the first quadrant for all admissible µ. More important, the map ϕ: (a, p, ε) → (υ_1, υ_2, ε) is smooth and, taking (25) into account, injective. The smoothness follows by the Inverse Function Theorem, since one can check that the determinant of the Jacobian of (υ_1, υ_2, ε) → (a, p, ε) is non-zero at the image by ϕ of any admissible parameter. Then one can check the trace of the Jacobian of the vector field at (x, y). In other words, τ is (up to a non-vanishing factor) the trace of the vector field. Finally, for convenience, we define ρ = (υ_1 − υ_2)/2 and σ = (υ_1 + υ_2)/2. Observe then that {p + q = a = 0} becomes {ρ = τ = 0}. In what follows, setting μ = (ρ, σ, τ) for shortness, we denote the vector field by X_μ. Let us also remark that the map µ → μ is smooth and injective as a consequence of the previous discussion.
At this point we claim that Q_µ is either a focus or a center. To show this we will check that the discriminant D_µ of the characteristic polynomial of the Jacobian matrix of X_µ at Q_µ is strictly negative for all admissible parameters. Indeed, one can verify that D_µ expressed in terms of (υ_1, υ_2, ε) can be written as, where the A_i are polynomials of degree 7. Thus D_µ = 0 gives two roots ε = ε_i(υ_1, υ_2), for i = 1, 2, that one can check to be well-defined continuous functions. To see the claim we first prove that, for i = 1, 2, This implies that D_µ cannot vanish at an admissible parameter due to the assumptions p < −1 and q > 1.
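The focus/center character of Q_µ can also be spot-checked numerically at a sample admissible parameter: locate Q_µ by a Newton iteration and verify that the discriminant of the characteristic polynomial of the Jacobian is negative. The sketch below (the Newton scheme and all helper names are ours) uses the f and g of Theorem 3.1:

```python
# Numeric spot check (one admissible parameter with p < -1, q > 1) that
# the singularity Q_mu in the first quadrant is of focus/center type.
a, p, q = 0.2, -3.0, 2.9

def F(x, y):                      # f of the cubic family
    return 1 + x + x*x + a*x*y + p*y*y

def G(x, y):                      # g of the cubic family
    return -1 - y + q*x*x + a*x*y - y*y

def newton(x, y, steps=50):
    """2D Newton iteration for f = g = 0 with the analytic Jacobian."""
    for _ in range(steps):
        fx, fy = 1 + 2*x + a*y, a*x + 2*p*y
        gx, gy = 2*q*x + a*y, -1 + a*x - 2*y
        det = fx*gy - fy*gx
        dx = (F(x, y)*gy - fy*G(x, y)) / det
        dy = (fx*G(x, y) - F(x, y)*gx) / det
        x, y = x - dx, y - dy
    return x, y

x0, y0 = newton(1.0, 1.0)         # Q_mu, starting inside the first quadrant
# At a singular point f = g = 0 the Jacobian of (x*f, y*g) reduces to
# [[x f_x, x f_y], [y g_x, y g_y]].
J11, J12 = x0*(1 + 2*x0 + a*y0), x0*(a*x0 + 2*p*y0)
J21, J22 = y0*(2*q*x0 + a*y0),   y0*(-1 + a*x0 - 2*y0)
disc = (J11 + J22)**2 - 4*(J11*J22 - J12*J21)   # trace^2 - 4*det
```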
We shall next solve the center-focus problem in the family. With this aim in view, taking a local transverse section at Q_µ we consider the displacement map D(s; μ), which extends analytically to s = 0, so that we can compute its Taylor expansion, where the remainder R is o(s²). Recall that the trace of X_μ at Q_μ is equal to τu_1(μ), where u_1 is a unit. The coefficients η_i are called the Lyapunov quantities of the focus. We have in particular (see for instance [18, p. 94]) that η_1(μ) = e^{τu_1(μ)} − 1 = τu_2(μ), where u_2 is again a unit. Since the first nonzero coefficient of the expansion is the coefficient of an odd power of s, see [18, p. 94] again, we get that η_2(μ) = τℓ_1(μ) for some analytic function ℓ_1. In order to obtain η_3 we shall appeal to the well-known relation between the Lyapunov quantities and the focus quantities which, following the notation in [18, Theorem 6.2.3], we denote by g_ii. The first ones are the coefficients in the Taylor expansion of the displacement map that we already introduced, while the second ones are the obstructions to the existence of a first integral. It occurs that η_{2i+1} − πg_ii ∈ (g_11, ..., g_{i−1,i−1}) and, more important for our purposes, that η_3 = πg_11. On account of this we can compute g_11 instead of η_3, which is easier to obtain, and in doing so (see [3, p. 29]) we get that In this respect we claim that η_3(μ)|_{τ=0} = ρh(ρ, σ) with h(ρ, σ) ≠ 0 in case that |ρ| < σ, which corresponds to the admissible values υ_1, υ_2 > 0 due to ρ = (υ_1 − υ_2)/2 and σ = (υ_1 + υ_2)/2.
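The fact that the displacement map of a weak focus starts at an odd power of s can be illustrated on a toy model (not the paper's X_µ): for ẋ = −y + ax(x² + y²), ẏ = x + ay(x² + y²) one has θ' = 1 and r' = ar³ in polar coordinates, so the return map is exactly P(s) = s/√(1 − 4πas²) = s + 2πas³ + ⋯, with the first nonzero displacement coefficient at s³. A numerical sketch (our own setup):

```python
import math

def return_map(s, a, n=20000):
    """Return map of the toy weak focus computed by integrating one full
    turn (time 2*pi, since theta' = 1 for this system) with RK4."""
    x, y = s, 0.0
    dt = 2 * math.pi / n
    def f(x, y):
        r2 = x*x + y*y
        return -y + a*x*r2, x + a*y*r2
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + dt/2*k1[0], y + dt/2*k1[1])
        k3 = f(x + dt/2*k2[0], y + dt/2*k2[1])
        k4 = f(x + dt*k3[0], y + dt*k3[1])
        x += dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        y += dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return math.hypot(x, y)

# exact return map for comparison: P(s) = s / sqrt(1 - 4*pi*a*s**2)
exact = 0.1 / math.sqrt(1 - 4 * math.pi * 0.5 * 0.01)
```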
In order to prove (f) note that if τ(µ) = 0 and ε = p + q = 0 then, from (26), ρ = (υ_1 − υ_2)/2 = 0. Hence, from (25), 2aυ_1υ_2 = 0, which implies a = 0 and shows the first assertion. That being established, we have already proved that Q_µ0 is a center if, and only if, p_0 + q_0 = a_0 = 0. We show next that, in this case, Cycl((Q_µ0, X_µ0), X_µ) ≤ 1. Indeed, since u_2 is a unit we can consider The upper bound for the cyclicity of Q_µ0 in the center case will follow once we prove that there exist s_0 > 0 and an open neighbourhood U of (ρ, σ, τ) = (0, σ*, 0) such that D_1(s; μ) has at most one zero on (0, s_0), counted with multiplicities, for all μ ∈ U with ρ² + τ² ≠ 0. Recall in this regard that D_1(s; μ)|_{ρ=τ=0} ≡ 0 and that σ* > 0 is the first component of Q_µ0. The idea to show this is exactly the same as in the proof of (c), but with fewer technicalities because the involved functions are analytic at s = 0. The desired property is evident when ρ = 0. In case that ρ ≠ 0 we compute the derivative of D_1 with respect to s to obtain ∂_s D_1(s; μ) = ρs(2h/u_2 + o(1)).
Let us turn now to the proof of the last assertion in the statement. Observe in this respect that the combination of (a) and (b) together with (d) and (e) shows that a simultaneous bifurcation of limit cycles from Γ and Q_µ can only occur if we perturb some µ* = (a*, p*, q*) with a* = p* + q* = 0. We shall prove by contradiction that this is not possible either. So assume that for each n ∈ N there exist µ_n = (a_n, p_n, q_n) and two limit cycles γ_n and γ'_n of the vector field X_µn in the first quadrant such that the Hausdorff distances d_H(γ_n, Γ) and d_H(γ'_n, Q_µn) tend to zero and µ_n tends to µ* as n → +∞. Let us consider the asymptotic expansion of the displacement map of X_µ at the polycycle Γ that we compute in (23) and denote it by D_p(s; µ). We also consider its Taylor expansion near the focus Q_µ given in (29) and denote it by D_c(s; µ). Then the assumption implies the existence of two sequences s_n → 0⁺ and s'_n → 0⁺ such that D_p(s_n; µ_n) = 0 and D_c(s'_n; µ_n) = 0 for all n ∈ N.
We claim that the first equality implies that Indeed, from (24) we have that Thus, due to lim_{n→+∞} ω(s_n; α(µ_n)) = +∞ and h_i ∈ F_∞(0³), we obtain that, with κ_i(µ*) ≠ 0 and, consequently, lim_{n→+∞} d_2(µ_n)/d_1(µ_n) = ∞. This, on account of (21) and (22), gives the limit in (30) and so the claim is true. Recall on the other hand that, in order to study the displacement map near the focus Q_µ, we use the more convenient parametrisation μ := (ρ, σ, τ) = φ(µ). That being said, setting (ρ_n, σ_n, τ_n) := φ(a_n, p_n, q_n) and arguing similarly as before, the fact that D_c(s'_n; µ_n) = 0 for all n ∈ N implies from (29) that Let us remark that here we also take into account that u_2 is a unit. We next arrive at a contradiction by showing that (30) and (31) cannot hold simultaneously. Indeed, one can verify that, setting σ Here, in addition to (31), we use that if p + q and a tend to zero then ρ → 0 and σ → σ*, where σ* is precisely the first component of the center at Q_µ* (which is on the diagonal of the first quadrant). This shows that (30) and (31) cannot occur simultaneously, which yields the desired contradiction and finishes the proof of the result.
The following is our second example of application of Theorem A. In this case the family of Kolmogorov systems depends on five parameters and, to the best of our knowledge, it has not been studied previously.

Theorem 3.2. Consider the family of Kolmogorov differential systems where µ = (a, b, c, p, q) ∈ R⁵ with c > 0, p > 0, q > 0 and b < 2√(pq), and let us fix any µ_0 = (a_0, b_0, c_0, p_0, q_0).
Then there exists a unique singular point Q_µ in the first quadrant, which is either a center, a focus or a node. Moreover, compactifying X_µ to the Poincaré disc, the boundary of the first quadrant is a polycycle Γ such that: (c) The return map of X_µ0 along Γ is the identity if, and only if, p_0 − c_0q_0 = 2c_0q_0a_0 − (c_0q_0 − c_0 + 1)b_0 = 0. In this case Q_µ0 is a center with a first integral which foliates the first quadrant. Moreover, Γ is the outer boundary of its period annulus and, in addition, Cycl((Γ, X_µ0), X_µ) = 1.

Remark 3.3. In contrast to the family of cubic Kolmogorov systems studied in Theorem 3.1, for the family in Theorem 3.2 there exist parameters µ_0 with d_1(µ_0) = 0 and d_2(µ_0) ≠ 0, so that Cycl((Γ, X_µ0), X_µ) = 1, satisfying additionally that the unique singular point Q_µ0 in the first quadrant is a non-degenerate node. Hence, for appropriate µ ≈ µ_0 we will have a limit cycle γ_µ with the non-monodromic singular point Q_µ as the unique singularity in its interior. For instance, the choice µ_0 = (−800.01, −900.99999, 1000, 1, 0.001) leads to this phenomenon with Q_µ0 = (0.1, 10). A similar occurrence is observed in [2, p. 203] to take place in the family of cubic Liénard systems studied in [7].
Moreover the hyperbolicity ratios are Then Γ is a polycycle and, by applying the Poincaré-Bendixson theorem, we deduce the existence of at least one singular point of Xµ in the first quadrant. We claim that there exists exactly one. In order to show this we suppose that (υ₁, υ₂) is a singular point of Xµ in the first quadrant and solve f(υ₁, υ₂; µ) = 0 and g(υ₁, υ₂; µ) = 0 for a and b as functions of c, p, q, υ₁ and υ₂. In doing so we obtain that The substitution of these values into f + cg, which is homogeneous of degree 2 in x and y, yields It is clear then that the vanishing of the above numerator provides the possible values of r such that Xµ has a singular point on the straight line y = rx. Since this numerator vanishes at x = υ₁ > 0, it must have another real zero, which has to be negative. Therefore Xµ has exactly one singular point in the first quadrant and exactly one singular point in the third quadrant, showing in particular the validity of the claim. An easy computation shows that the determinant of the Jacobian of Xµ at Qµ = (υ₁, υ₂) is equal to 2υ₁²(cq + c + 1) + 2υ₂²(p + c + 1) > 0, so that Qµ can be a center, a focus or a node.
So far we have proved that the first assertion in the statement is true. Let us turn to the proof of assertions (a), (b) and (c). The first one follows from (a) in Theorem A because The second assertion will follow by applying (b) and (c) in Theorem A. In this regard let us recall that The explicit integration of these functions leads to several cases depending on the parameters. To avoid this we note that It is clear that the same formula holds for log L₂₁(1) replacing p, q and n by q, p and n′, respectively. On account of this and the fact that, from (36), mb + 2n′q = −mb − 2np, we get From this expression we conclude that there exist s₀ > 0 and an open neighbourhood U of ν = 0₅ such that R₁(s; ν) has at most one zero on (0, s₀), counted with multiplicities, for all ν = (ν₁, ν₂, ν₃, ν₄, ν₅) ∈ U with ν₁² + ν₂² ≠ 0, which implies that Cycl((Γ, X_{µ₀}), Xµ) ⩽ 1. The proof of this follows exactly as we argued to show the same fact in Theorem 3.1, cf. (24), and it is omitted for brevity. Finally, the fact that Cycl((Γ, X_{µ₀}), Xµ) ⩾ 1 follows by taking µ₁ ≈ µ₀ with p₁ − c₁q₁ = 0 and 2c₁q₁a₁ − (c₁q₁ − c₁ + 1)b₁ ≠ 0, and applying the assertion in (b). This completes the proof of the result.

Remark 3.4. In order to prove Theorem 3.2 it is only necessary to compute the functions d₁ and d₂ in Theorem A, which give the conditions for cyclicity 0 and 1, respectively. As a matter of fact, we computed the function d₃ as well, realizing that it vanishes whenever d₁ = d₂ = 0. It was this fact that led us to investigate whether the return map along the polycycle is the identity in that case. For completeness let us explain succinctly the computations involved in obtaining d₃ for the Kolmogorov family considered in Theorem 3.2. Recall, see (e) in Theorem A, that In order to proceed with the computation of M̂₁ we use the identity ∫₀¹ (1 + ηx²)^r x^(−α−1) dx = −(1/α) ₂F₁(−r, −α/2; 1 − α/2; −η), valid for all α < 0, where in the first equality we apply
(b) in Proposition 2.5 with k = 0 and in the second one we use the equality in [1, 15.3.1] to express the definite integral as a hypergeometric function. In principle the above equality is only true provided that α < 0. However, its validity can be extended to any α ∉ N thanks to the meromorphic properties of the functions ₂F₁ and Ĵ established, respectively, in [17, Lemma B.2] and (d) in Proposition 2.5. Consequently, thanks to this observation and applying the above formula twice, we get M̂₁(1/λ₁, 1) = −((aq + b)/q) ϕ₁(c, q) + ((cb − (c + 1)a)/(2 − q)) ϕ₂(c, q), where ϕ₁(c, q) := ₂F₁((3 − q)/2 − 1/(2c), −q/2; 1 − q/2; −c) and ϕ₂(c, q) := ₂F₁((3 − q)/2 − 1/(2c), 1 − q/2; 2 − q/2; −c).
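As a sanity check, the integral-to-hypergeometric identity used above can be verified numerically. We assume integration bounds 0 and 1 (the integral sign was lost in extraction, but these bounds are the ones consistent with the Euler representation [1, 15.3.1] cited in the text); the parameter values below are purely illustrative.

```python
# Numerical check of (assumed bounds 0 to 1):
#   int_0^1 (1 + eta*x^2)^r * x^(-alpha-1) dx
#     = -(1/alpha) * 2F1(-r, -alpha/2; 1 - alpha/2; -eta),  alpha < 0.
from scipy.integrate import quad
from scipy.special import hyp2f1

r, alpha, eta = 1.7, -0.8, 0.35  # illustrative values with alpha < 0

# Left-hand side: direct numerical quadrature of the definite integral.
lhs, _ = quad(lambda x: (1 + eta * x**2)**r * x**(-alpha - 1), 0, 1)

# Right-hand side: Gauss hypergeometric evaluation.
rhs = -(1 / alpha) * hyp2f1(-r, -alpha / 2, 1 - alpha / 2, -eta)

print(abs(lhs - rhs) < 1e-10)
```

The agreement also illustrates why the formula extends meromorphically beyond α < 0: the right-hand side is analytic in α away from the poles of ₂F₁.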
This is an equation in b, c and q that involves four hypergeometric functions. Surprisingly enough, it turns out, by applying the formula in [1, 15.3.7], that the function on the left-hand side of the above equation is identically zero. In other words, d₁(µ₀) = d₂(µ₀) = 0 implies d₃(µ₀) = 0.
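For context, [1, 15.3.7] is the connection formula relating ₂F₁(a, b; c; z) to hypergeometric functions of argument 1/z. The snippet below (with illustrative parameter values, and assuming scipy is available) checks that transformation numerically; it is only a sanity check of the formula invoked above, not of the specific identity for d₃.

```python
# Numerical check of the 2F1 connection formula z -> 1/z ([1, 15.3.7]),
# valid for |arg(-z)| < pi; parameter values are illustrative.
from math import gamma
from scipy.special import hyp2f1

a, b, c, z = 0.3, 0.9, 1.4, -2.0

lhs = hyp2f1(a, b, c, z)
rhs = (gamma(c) * gamma(b - a) / (gamma(b) * gamma(c - a))
       * (-z) ** (-a) * hyp2f1(a, 1 - c + a, 1 - b + a, 1 / z)
       + gamma(c) * gamma(a - b) / (gamma(a) * gamma(c - b))
       * (-z) ** (-b) * hyp2f1(b, 1 - c + b, 1 - a + b, 1 / z))

print(abs(lhs - rhs) < 1e-10)
```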

Figure 1: Placement of the hyperbolic saddles and the polycycle Γ in the Poincaré disc.

by Theorem 2.6, this lower bound follows by applying (b) in Theorem A.

Definition 2.2.
is called the Ecalle-Roussarie compensator. Consider an open subset U ⊂ Ŵ ⊂ R^(N+1). We say that a function ψ(s; ν) belongs to the class C^∞_{s>0}(U) if there exist an open neighbourhood Ω of
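The defining formula is cut off above; in the polycycle literature the compensator is commonly defined (and we assume that standard definition in the sketch below) as ω(s; α) = (s^(−α) − 1)/α for α ≠ 0 and ω(s; 0) = −log s, so that it is a deformation of the logarithm. A minimal numerical sketch of this key property:

```python
# Sketch of the Ecalle-Roussarie compensator, assuming the standard
# definition (the formula itself is truncated in the text above):
#   omega(s; alpha) = (s^(-alpha) - 1) / alpha   if alpha != 0,
#   omega(s; 0)     = -log(s).
import math

def omega(s, alpha):
    """Ecalle-Roussarie compensator (assumed standard definition)."""
    if alpha == 0.0:
        return -math.log(s)
    return (s ** (-alpha) - 1.0) / alpha

# As alpha -> 0 the compensator converges to -log s, which is why it is
# called a deformation of the logarithm.
s = 0.37
print(abs(omega(s, 1e-8) - omega(s, 0.0)) < 1e-6)
```

In the asymptotic expansions of the Dulac map, ω(s; α) replaces log s in a way that stays uniformly well behaved as the hyperbolicity ratio crosses resonant values.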

Figure 2: Definition of the Dulac map D(·; ν), where ϕ(t, p; ν) is the solution of X_ν passing through the point p ∈ U at time t = 0.

Proposition 2.5.
Let us consider an open interval I of R containing x = 0 and an open subset U of R M .

Figure 3: Auxiliary Dulac maps for the definition of D = D₂ ∘ D₁ − D₃ in Theorem 2.6. The return map in Theorem A, with respect to the transverse section Σ₁, would be R = D₃⁻¹ ∘ D₂ ∘ D₁ = D₃⁻¹ ∘ D + Id.