Fine asymptotics of profiles and relaxation to equilibrium for growth-fragmentation equations with variable drift rates

We are concerned with the long-time behavior of the growth-fragmentation equation. We prove fine estimates on the principal eigenfunctions of the growth-fragmentation operator, giving their first-order behavior close to 0 and $+\infty$. Using these estimates we prove a spectral gap result by following the technique in [Caceres, Canizo, Mischler 2011, JMPA], which implies that solutions decay to the equilibrium exponentially fast. The growth and fragmentation coefficients we consider are quite general, essentially only assumed to behave asymptotically like power laws.


Introduction
In this paper we are interested in the long-time behavior of the growth-fragmentation equation, commonly used as a model for cell growth and division and other phenomena involving fragmentation [10,6,4]. There are a number of works which study the existence and other properties of the first eigenfunctions (also called profiles) of the growth-fragmentation operator and its dual [7,2,3] and the convergence of solutions to equilibrium [5,11,9,8,1,12]. These eigenfunctions are fundamental since they give the asymptotic shape of solutions (i.e., the stationary solution of the rescaled equation) and a conserved quantity of the time evolution. However, precise estimates on their behavior close to 0 and +∞ are usually not given, are very rough, or are restricted to a particular kind of growth or fragmentation coefficients. Our first objective is to give accurate estimates on the first eigenfunctions, valid for a wide range of growth and fragmentation coefficients which include most cases in which they behave like power laws. We give, in most cases, the first-order behavior of both first eigenfunctions (of the growth-fragmentation operator and its dual); detailed results are given later in this introduction. Our second objective is to use these estimates to show that the growth-fragmentation operator has a spectral gap (in a certain natural Hilbert space) for a wide choice of the coefficients, which is interesting because it readily implies exponential convergence to equilibrium of solutions. For this we follow the techniques in [1], which require careful estimates on the profiles which were previously available only for particular growth rates (constant and linear). Our results on exponential convergence to equilibrium are valid for general coefficients behaving like power laws, improving or complementing known results applicable to constant or linear total fragmentation rates [5,11,1].
However, our results still impose some restrictions on the fragment distribution (which must be bounded below) and the decay of the total fragmentation rate for small sizes.
Let us introduce the equation under study more precisely and state our main results. The growth-fragmentation equation is given by
$$\partial_t g_t(x) + \partial_x \big( \tau(x)\, g_t(x) \big) + \lambda\, g_t(x) = L g_t(x) \qquad (t, x > 0), \tag{1a}$$
$$(\tau g_t)(0) = 0 \qquad (t \ge 0). \tag{1b}$$
The unknown is a function $g_t(x)$ which depends on the time $t \ge 0$ and on $x > 0$, and for which an initial condition $g_{\mathrm{in}}$ is given at time $t = 0$. The positive function $\tau$ represents the growth rate. The symbol $L$ stands for the fragmentation operator (see below), and $\lambda$ is the largest eigenvalue of the operator $g \mapsto -\partial_x(\tau g) + L g$, acting on a function $g = g(x)$ depending only on $x$.
The main motivation for the study of equation (1) is the closely related equation
$$\partial_t n_t(x) + \partial_x \big( \tau(x)\, n_t(x) \big) = L n_t(x) \qquad (t, x > 0), \tag{2}$$
with the same initial and boundary conditions. Solutions of the two are related by $n_t(x) = e^{\lambda t} g_t(x)$, and $n_t$ represents the size distribution at a given time $t$ of a population of cells (or other objects) undergoing growth and division phenomena. The population grows at an exponential rate determined by $\lambda > 0$, called the Malthus parameter, and approaches an asymptotic shape for large times. Equation (1) has a stationary solution and is more convenient for studying its asymptotic behavior, which is why it is commonly considered. Of course, results about (1) are easily translated to results about (2) through the simple change $n_t(x) = e^{\lambda t} g_t(x)$.
The fragmentation operator $L$ acts on a function $g = g(x)$ as
$$L g = L_+ g - B\, g,$$
where the positive part $L_+$ is given by
$$(L_+ g)(x) = \int_x^{+\infty} b(y, x)\, g(y)\, \mathrm{d}y.$$
The coefficient $b(y, x)$, defined for $y \ge x \ge 0$, is the fragmentation coefficient, and $B(x)$ is the total fragmentation rate of cells of size $x > 0$. It is obtained from $b$ through
$$y\, B(y) = \int_0^y x\, b(y, x)\, \mathrm{d}x.$$
The eigenproblem associated to (1) is the problem of finding both a stationary solution of (1) and a stationary solution of the dual equation; that is, the first eigenfunction of the growth-fragmentation operator $g \mapsto -(\tau g)' + L(g)$ and of its dual $\varphi \mapsto \tau \varphi' + L^*(\varphi)$. If $\lambda$ is the largest eigenvalue of the operator $g \mapsto -(\tau g)' + L g$, the associated eigenvector $G$ satisfies
$$-(\tau G)'(x) + L G(x) = \lambda\, G(x) \qquad (x > 0), \tag{3a}$$
together with the boundary condition $(\tau G)(0) = 0$ and the requirement that $G \ge 0$ be integrable and suitably normalized. Of course, the eigenvector $G$ is an equilibrium (i.e., a stationary solution) of equation (1). The associated dual eigenproblem reads
$$\tau(x)\, \varphi'(x) + L^* \varphi(x) = \lambda\, \varphi(x) \qquad (x > 0), \tag{4}$$
where
$$L^* \varphi(x) = \int_0^x b(x, y)\, \varphi(y)\, \mathrm{d}y - B(x)\, \varphi(x).$$
This dual eigenproblem is interesting because $\varphi$ gives a conservation law for (1):
$$\int_0^{+\infty} g_t(x)\, \varphi(x)\, \mathrm{d}x = \int_0^{+\infty} g_{\mathrm{in}}(x)\, \varphi(x)\, \mathrm{d}x \qquad (t \ge 0).$$
In this paper we always denote by $G$, $\varphi$ and $\lambda$ the solution to (3) and (4).
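To make the eigenproblem concrete, the toy sketch below discretizes the operator $g \mapsto -(\tau g)' + Lg$ on a truncated grid and extracts the principal eigenpair $(\lambda, G)$ numerically. The coefficients $\tau \equiv 1$, $B(x) = x$ and the uniform fragment density $p(z) = 2$ (hence $b(y, x) = 2$) are illustrative choices, not the paper's general setting, and the upwind discretization and truncation are ours.

```python
import numpy as np

# Toy discretization of (A g)(x) = -(tau g)'(x) + int_x^inf b(y,x) g(y) dy - B(x) g(x)
# with illustrative coefficients tau(x) = 1, B(x) = x and b(y, x) = 2
# (uniform fragment density p(z) = 2 on [0, 1], which satisfies int z p(z) dz = 1).
n, X = 400, 12.0               # grid points, truncation of (0, +inf) to (0, X]
h = X / n
x = (np.arange(n) + 1) * h

tau = np.ones(n)
B = x.copy()

A = np.zeros((n, n))
for i in range(n):
    # upwind transport -(tau g)'(x_i), using the boundary condition (tau g)(0) = 0
    A[i, i] += -tau[i] / h - B[i]
    if i > 0:
        A[i, i - 1] += tau[i - 1] / h
    # positive part of fragmentation: int_{x_i}^X b(y, x_i) g(y) dy with b = 2
    A[i, i:] += 2.0 * h

# Perron eigenpair: the eigenvalue with largest real part is real with a
# sign-constant eigenvector (Perron-Frobenius applies to A + c*Id, c large)
w, V = np.linalg.eig(A)
k = int(np.argmax(w.real))
lam = w[k].real
G = V[:, k].real
if G[np.argmax(np.abs(G))] < 0:
    G = -G
G = G / (G.sum() * h)          # normalize the profile to unit mass
```

The Malthus parameter comes out as `lam > 0`, and `G` approximates the stationary profile of (1) for these particular coefficients.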
In the rest of this introduction we describe the assumptions used throughout the paper and state our main results. In Section 2 we give the proof of our estimates on the stationary solution G, and Section 3 is devoted to estimates of the dual eigenfunction φ. Our results on the spectral gap of the growth-fragmentation operator are proved in Section 4, and we also include two appendices: one, Appendix A, on different approximation procedures that may be used for G and φ, and which are more convenient in some of our arguments; and Appendix B, which gives asymptotic estimates of some of the expressions involving the positive part L + of the fragmentation operator, and are used for our large-x estimates of G.

Assumptions on the coefficients
For proving our results we need some or all of the following assumptions. First of all, we assume that the fragmentation coefficient $b$ is of self-similar form, which is general enough to encompass most interesting examples and still allows us to obtain accurate results on the asymptotics of $G$ and $\varphi$:

Hypothesis 1.1. The fragmentation coefficient is of the form
$$b(y, x) = \frac{B(y)}{y}\, p\!\left( \frac{x}{y} \right) \tag{5}$$
for some locally integrable $B : (0, +\infty) \to (0, +\infty)$, and some nonnegative finite measure $p$ on $[0, 1]$ satisfying the mass preserving condition
$$\int_{[0,1]} z\, p(\mathrm{d}z) = 1.$$
(When writing the integral of a measure it is always understood that the integration limits are included in the integral.) The measure $p$ gives the distribution of fragments obtained when a particle of a certain size breaks.

Remark 1.1. Define, for $k \ge 0$, the moment
$$\pi_k := \int_{[0,1]} z^k\, p(\mathrm{d}z).$$
We have from Hypothesis 1.1 that $\pi_1 = 1$ and $\pi_0 > 1$. Physically, $\pi_0$ represents the mean quantity of fragments produced by the fragmentation of one particle. Because of the strict inequality $\pi_0 > \pi_1$, one can deduce that $p$ is not concentrated at $z = 1$ (i.e., $p \neq \pi_0\, \delta_1$). As a consequence we have that $\pi_k < 1$ if $k > 1$ and $\pi_k > 1$ if $k < 1$.
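As a quick sanity check of Remark 1.1, the moments $\pi_k$ can be computed in closed form for two standard fragment measures (the uniform density $p(z) = 2$ and the mitosis measure introduced below; both are illustrative choices):

```python
from fractions import Fraction

# Moments pi_k = int_0^1 z^k p(dz) for two fragment measures.
# Mitosis: p = 2*delta_{1/2}  =>  pi_k = 2 * (1/2)**k
def pi_mitosis(k):
    return 2 * Fraction(1, 2) ** k

# Uniform fragment density p(z) = 2 on [0, 1]  =>  pi_k = 2/(k+1)
def pi_uniform(k):
    return Fraction(2, k + 1)

for pi in (pi_mitosis, pi_uniform):
    assert pi(1) == 1       # mass preservation: pi_1 = 1
    assert pi(0) > 1        # mean number of fragments exceeds one
    assert pi(2) < 1        # pi_k < 1 for k > 1, as in Remark 1.1
```

Both examples confirm the monotone behavior of $k \mapsto \pi_k$ stated in the remark.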
Our next assumption says that the growth rate and total fragmentation rate have a power-law behavior for large and small sizes:

Hypothesis 1.3 (Asymptotics of fragmentation and drift rates). Assume that, for some constants $B_0, \tau_0 > 0$ and exponents $\gamma_0, \alpha_0, \gamma, \alpha$,
$$B(x) \underset{x \to 0}{\sim} B_0\, x^{\gamma_0}, \qquad \tau(x) \underset{x \to 0}{\sim} \tau_0\, x^{\alpha_0},$$
and that $B(x)$ and $\tau(x)$ behave like $x^{\gamma}$ and $x^{\alpha}$, respectively (up to positive multiplicative constants), as $x \to +\infty$. We also impose the condition
$$\gamma - \alpha + 1 > 0 \tag{11}$$
to ensure the existence of a solution to the eigenproblem (see [2,7]).
Likewise, we impose that the distribution of small fragments behaves like a power law:

Hypothesis 1.4 (Behavior of p close to 0). There exist $p_0 \ge 0$ and $\mu > 0$ such that (12) holds, with the condition (13).

Remark 1.2. When $p_0 > 0$, condition (12) is the same as
$$p(z) \sim p_0\, z^{\mu - 1} \quad \text{as } z \to 0. \tag{14}$$
We prefer to write it as given in order to allow for $p_0 = 0$, which is not covered by the notation (14). For instance, if $p(z)$ is equal to 0 in a neighborhood of 0 (such as in the mitosis case, see below), then (12) holds with $p_0 = 0$, but (14) does not make sense.
To find the asymptotic behavior of the function $G$ when $x \to \infty$, the following hypothesis will also be needed.

Hypothesis 1.5 (Asymptotics to second order). Assume that $\tau$ is a $C^1$ function and that, for some $\delta > 0$ and $\nu > -1$, the corresponding second-order expansions hold.

Finally, to prove the entropy-entropy dissipation inequality, we will need an additional restriction on the fragmentation coefficient. It essentially says that $p$ is uniformly bounded below by some constant $p > 0$, and that it behaves like a constant at the endpoints 0 and 1:

Hypothesis 1.6. There exist positive constants $p, p_0, p_1 > 0$ such that $p(z) \ge p$ for all $z \in [0, 1]$, with $p(z) \to p_0$ as $z \to 0$ and $p(z) \to p_1$ as $z \to 1$, and $\alpha_0 < 2$ (which is nothing but condition (13) in the case $\mu = 1$).
The reader may check that [2, Theorem 1], which gives existence and uniqueness of $G$, $\varphi$, $\lambda$ satisfying (3) and (4), is applicable under Hypotheses 1.1-1.4. We assume at least these hypotheses throughout the paper in order to ensure the existence of profiles.
Self-similar fragmentation. The previous case with $\tau(x) = x$ is referred to as the self-similar fragmentation equation. It is closely related to the fragmentation equation $\partial_t g_t = L(g_t)$ (see [3,1]).
Mitosis. Cellular division by equal mitosis is modeled by a distribution of fragments $p$ concentrated at a size equal to one half: $p = 2\, \delta_{1/2}$. This measure $p$ satisfies Hypothesis 1.4 with $p_0 = p_1 = 0$ (the values of $\mu$, $\nu$ being irrelevant). In order to make the theory work, one has to choose $B$ and $\tau$ such that the rest of the hypotheses are satisfied. For instance, $B(x) = x^{\gamma}$ and $\tau(x) = x^{\alpha}$ with $\gamma - \alpha + 1 > 0$ (and then defining $b(x, y)$ through (5)) are valid choices, for the same reasons as before.

Summary of main results
Estimates on the profiles. We describe the asymptotics of the profile $G$ and give accurate bounds on the eigenvector $\varphi$. Define
$$\Lambda(x) := \int_1^x \frac{\lambda + B(y)}{\tau(y)}\, \mathrm{d}y,$$
where the parameters are the ones appearing in the previous hypotheses. In Section 2.2 we prove the following result, which improves previous estimates of the profile $G$ given in [3,1,2].

Theorem 1.7. Assume Hypotheses 1.1-1.5. There exists $C > 0$ such that
$$G(x) \underset{x \to +\infty}{\sim} C\, x^{\xi - \alpha}\, e^{-\Lambda(x)}. \tag{18}$$

This result works for all the examples given before. For all of them, it shows that the profile $G$ decays exponentially for large sizes, with a precise exponential rate given by $\Lambda(x)$. We observe that $\Lambda(x)$ behaves like $x^{\gamma_+ - \alpha + 1}$ (with $\gamma_+ := \max\{\gamma, 0\}$), which is always a positive power of $x$. There are some observations about this that match intuition: the equilibrium profile decays faster when the total fragmentation rate is stronger for large sizes, and it decays slower when the growth rate is larger for large sizes. Also, it is interesting to notice that $\Lambda$ does not depend on the fragment distribution (that is, on $p$), but only on the total fragmentation rate $B$.
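The claimed growth $\Lambda(x) \asymp x^{\gamma_+ - \alpha + 1}$ can be checked numerically. The sketch below assumes $\Lambda(x) = \int_1^x (\lambda + B(y))/\tau(y)\,\mathrm{d}y$ with the illustrative normalizations $B(y) = y^{\gamma}$ and $\tau(y) = y^{\alpha}$ (for pure powers the integral is elementary, so we use the closed form):

```python
# Lambda(x) = int_1^x (lam + B(y)) / tau(y) dy for B(y) = y**gamma, tau(y) = y**alpha.
# Closed form of the integral (valid for alpha != 1 and gamma - alpha + 1 != 0):
def Lam(x, lam, gamma, alpha):
    return (lam * (x ** (1 - alpha) - 1) / (1 - alpha)
            + (x ** (gamma - alpha + 1) - 1) / (gamma - alpha + 1))

# Leading order as x -> infinity: Lambda(x) ~ c * x**(gamma_plus - alpha + 1),
# gamma_plus = max(gamma, 0).  (Case gamma = 0 is excluded here: both terms
# then contribute at the same order.)
def leading(x, lam, gamma, alpha):
    gp = max(gamma, 0.0)
    c = 1 / (gamma - alpha + 1) if gamma > 0 else lam / (1 - alpha)
    return c * x ** (gp - alpha + 1)

x = 1e8
for (lam, gamma, alpha) in [(1.3, 0.5, 0.5), (0.7, -0.5, 0.2)]:
    ratio = Lam(x, lam, gamma, alpha) / leading(x, lam, gamma, alpha)
    assert abs(ratio - 1) < 1e-2
```

For $\gamma > 0$ the fragmentation term dominates the exponent, while for $\gamma < 0$ the eigenvalue term $\lambda/\tau$ does, matching the dichotomy in $\gamma_+$.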
The additional power $x^{\xi - \alpha}$, which gives a correction to the exponential behavior, depends in turn only on the behavior of the distribution of fragments $p(z)$ close to $z = 1$, that is, on fragments of size close to the size of the particle that breaks. In the mitosis case, for example, $\xi = 0$, since we obtain no fragments of similar size when a particle breaks.
The behavior of $G(x)$ for $x$ close to 0 depends on the power $\alpha_0$ from Hypothesis 1.3 and on the distribution of small fragments that result when a particle breaks. The following result is proven in Section 2.3. It shows that $G$ is (roughly) more concentrated close to 0 the weaker the growth is for smaller sizes, and is less concentrated when there are fewer smaller fragments resulting from breakage. This result includes cases in which $G(x)$ blows up as $x \to 0$, cases in which it behaves like a constant, and cases in which it tends to 0 like a power. We recall that the boundary condition is $\tau(x) G(x) \to 0$ as $x \to 0$, which is always ensured by $\mu > 0$ from Hypothesis 1.4.
For the profile $\varphi$ we derive the following estimates, proved in Section 3 by the use of a maximum principle (Lemma 3.2): Theorem 1.9. Assume Hypotheses 1.1-1.4. If $\gamma > 0$, there are two positive constants $C_1$ and $C_2$ such that the corresponding two-sided bound holds. If $\gamma < 0$, and under the additional assumption that $\mu = 1$ and $p_0 > 0$ in Hypothesis 1.4, there exist two positive constants $C_1$ and $C_2$ such that the analogous bound holds. Estimates of $\varphi$ are significantly harder than those of $G$, and they have to be obtained through comparison arguments. To our knowledge, this is the first result in which $\varphi$ can be bounded above and below by the same power (except for the cases in which $\varphi$ can be found explicitly). It also improves the results in [1] in that it is valid for a general power-law behavior of $\tau$.
We do not include the case $\gamma = 0$ in the above theorem (that is, $B(x)$ asymptotic to a constant as $x \to +\infty$), but we remark that in the case of $B(x)$ equal to a constant (and with the very mild condition that $\int_0^x b(x, y)\, \mathrm{d}y$ is equal to a constant independent of $x$), then $\varphi \equiv 1$. The case $\tau(x) = \tau_0\, x$ is also explicit: in that case, $\lambda = \tau_0$ and $\varphi(x) = C x$ for some number $C > 0$.
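Both explicit cases can be checked by direct substitution in the dual eigenproblem. The computation below is a sketch; it assumes the dual equation in the form $\tau \varphi' + \int_0^x b(x,y)\,\varphi(y)\,\mathrm{d}y - B \varphi = \lambda \varphi$ and the mass-conservation identity $\int_0^x y\, b(x,y)\,\mathrm{d}y = x\, B(x)$ implied by Hypothesis 1.1.

```latex
% Constant total fragmentation rate: B(x) \equiv \bar{B}, with
% \int_0^x b(x,y)\,\mathrm{d}y = \pi_0 \bar{B}.  Substituting \varphi \equiv 1:
0 + \pi_0 \bar{B} - \bar{B} = \lambda
\quad\Longrightarrow\quad
\lambda = (\pi_0 - 1)\, \bar{B} .
% Linear growth rate: \tau(x) = \tau_0 x.  Substituting \varphi(x) = x and using
% \int_0^x y\, b(x,y)\,\mathrm{d}y = x\, B(x):
\tau_0 x + x B(x) - B(x)\, x = \lambda x
\quad\Longrightarrow\quad
\lambda = \tau_0 .
```

In the second case the fragmentation terms cancel exactly, which is why the linear dual eigenfunction is explicit for any $B$.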
As for the behavior at zero, we prove the following result. We remark that the behavior of $\Lambda(x)$ for small $x$ is determined by whether $\alpha_0 < 1$, $\alpha_0 = 1$, or $\alpha_0 > 1$. (c) If $\alpha_0 > 1$, then $\varphi(x)$ decays exponentially fast as $x \to 0$.
Spectral gap. The estimates of the previous theorems allow us to prove a spectral gap inequality. The general relative entropy principle [8,9] applies here: we have
$$\frac{\mathrm{d}}{\mathrm{d}t} \int_0^{+\infty} \varphi(x)\, G(x)\, H\!\left( \frac{g_t(x)}{G(x)} \right) \mathrm{d}x \;\le\; 0,$$
where $H$ is any convex function. In the particular case of $H(u) = (u - 1)^2$ we denote
$$H[g|G] := \int_0^{+\infty} \varphi(x)\, G(x) \left( \frac{g(x)}{G(x)} - 1 \right)^2 \mathrm{d}x \tag{21}$$
and obtain that
$$\frac{\mathrm{d}}{\mathrm{d}t}\, H[g_t|G] = -D[g_t|G], \qquad D[g|G] := \int_0^{+\infty}\!\!\int_0^{y} \varphi(x)\, b(y, x)\, G(y) \left( \frac{g(x)}{G(x)} - \frac{g(y)}{G(y)} \right)^2 \mathrm{d}x\, \mathrm{d}y. \tag{22}$$
The next result shows that $H$ is in fact bounded by a constant times $D$:

Theorem 1.11. Assume that the coefficients satisfy Hypotheses 1.1-1.6, with one of the following additional conditions on the exponents $\gamma_0$ and $\alpha_0$:
• either $\alpha_0 > 1$,
• or $\alpha_0 = 1$ and $\gamma_0 \le 1 + \lambda/\tau_0$,
• or $\alpha_0 < 1$ and $\gamma_0 \le 2 - \alpha_0$.
Assume also that $\gamma \neq 0$. Then the following inequality holds for some constant $C > 0$ and for any nonnegative measurable function $g$:
$$H[g|G] \le C\, D[g|G]. \tag{23}$$
Remark that in general we do not know the value of the eigenvalue $\lambda$ which appears in the assumption on $\gamma_0$ for the case $\alpha_0 = 1$. Nevertheless, in the case of the self-similar fragmentation equation (i.e. $\tau(x) \equiv \tau_0\, x$) we know, by integration of equation (3a) multiplied by $x$, that $\lambda = \tau_0$, and the condition on $\gamma_0$ becomes $\gamma_0 \le 2$. Thus Theorem 1.11 includes the result of the first part of [1].

The main restrictions on the coefficients needed for Theorem 1.11 to hold are the following. First, we require Hypothesis 1.6, which says that the fragment distribution $p$ should be bounded below. Consequently, this does not include the mitosis case and other cases in which the fragment distribution has "gaps"; we refer to [5] for a proof that exponential decay does hold in that case, at least for a constant total fragmentation rate. Second, the exponent $\gamma_0$ cannot be too large, in order to ensure that the term $b(x, y)$ which appears in the entropy dissipation is not too small and can be bounded below by our methods. (An exception to this is the case $\alpha_0 > 1$: in this case $\varphi(x)$ decays exponentially fast as $x \to 0$, and this allows us to remove the upper bound on $\gamma_0$.) This restriction on $\gamma_0$ might be a shortcoming of the arguments we are using; we do not know if there is a spectral gap when it is removed.
On the other hand, it is remarkable that Theorem 1.11 does not place any restrictions on the behavior of the fragmentation or growth coefficients for large sizes. This is a significant improvement over [1], where the behavior at 0 and +∞ of the coefficients was taken to be the same power of x, and results were restricted to the cases in which τ is constant or linear.
2 Estimates of the profile G

2.1 Estimates of the moments of G

When Hypothesis 1.3 is satisfied, we define
$$\Lambda(x) := \int_1^x \frac{\lambda + B(y)}{\tau(y)}\, \mathrm{d}y,$$
where $\gamma_+ = \max\{0, \gamma\}$. Remark that, for $\gamma \ge 0$, we have the relation (25).

Proof. As usual, we carry out a priori estimates which can be rigorously justified by an approximation procedure (such as the truncated equation (50)). As $G$ is integrable, it is enough to prove the convergence of the above integral on $(x_0, +\infty)$ for a sufficiently large $x_0 > 0$. Hence, take any $x_0 > 0$, multiply (3a) by $x^{1-m} e^{\Lambda(x)}$ with $m > 1 + \xi$, and integrate on $(x_0, +\infty)$ to obtain (26), where we have done an integration by parts on the last term.
We first consider the case $\xi > 0$ (that is, $\gamma \ge 0$ and $\nu = 0$). From equation (9) we have that for any $\varepsilon > 0$ there exists $x_0 > 0$ such that (27) holds and, applying Lemma B.3, also such that (28) holds (observe that we have used $\gamma \ge 0$ and $\nu = 0$ here). Using (27) and (28) we obtain from (26) a bound for the integral on the left-hand side. If $m > 1 + \xi$ we can always choose $\varepsilon$ small enough for this to be true, because of relation (25), and this proves the result.
The remaining case is $\xi = 0$. In this case we have to substitute (28) by the estimate (29), according to Lemma B.3. The exponent of $y$ on the right-hand side of (29) is strictly smaller than $\alpha - m$, so we can always find $x_0$ large enough for the corresponding term to be controlled. Using this and (27) in (26), we may follow a similar reasoning as before to obtain the result.

Asymptotic estimates of G as x → +∞
In this section we prove Theorem 1.7 by using the moment estimates in Section 2.1.
Proof of Theorem 1.7. We divide the proof into two steps. Step 1: proof that the limit is finite. Again, we carry out a priori estimates on the solution which can be fully justified by using the approximation (50). Let us first prove that $x^{\alpha - \xi}\, G(x)\, e^{\Lambda(x)}$ has a finite limit $C \ge 0$ as $x \to +\infty$; later we will show that $C > 0$. We use equation (3a) to obtain (30). Let us show that the right-hand side of this last expression is integrable on $(x_0, +\infty)$ for some $x_0 > 0$. Once we have this, the result is proved, since then $x^{\alpha - \xi}\, G(x)\, e^{\Lambda(x)}$ must have a limit as $x \to +\infty$. Integrating the right-hand side, we just need to show that the parenthesis is of the order of $e^{\Lambda(x)}\, x^{\alpha - \xi - 1 - \varepsilon}$ for some $\varepsilon > 0$, and then Lemma 2.1 shows that the above integral is finite.
The case ξ = 0. In this case we have from Lemma B.3 Using the same reasoning as at the end of the proof of Lemma 2.1 we have that, which then shows that the right hand side of (30) is finite due to Lemma 2.1.
Step 2: proof that C > 0. In order to show that $C > 0$ in (18), set $F(x) := \tau(x)\, G(x)\, e^{\Lambda(x)}$ and obtain from (3a) that $F'(x) = e^{\Lambda(x)}\, L_+ G(x)$. In particular, $F$ is nondecreasing, and this is enough to conclude in the case $\xi = 0$ (since then $\tau(x) G(x) e^{\Lambda(x)}$ must converge to a positive quantity, so the same must be true of $x^{\alpha} G(x) e^{\Lambda(x)}$). In the case $\xi > 0$ we may bound $\int e^{-\Lambda(y)}\, \mathrm{d}y$, which implies a lower bound on $F$. Due to equation (16), and due to Lemma B.3, we obtain an estimate with $C_1 \in \mathbb{R}$ some real number. As a consequence, $\lim_{x \to +\infty} F(x)\, x^{-\xi}$ (which we know exists) must be strictly positive. This finishes the proof.
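The monotonicity of $F$ used in Step 2 can be expanded as follows. This is a sketch, assuming the eigenequation (3a) in the form $(\tau G)' = L_+ G - (\lambda + B)\, G$ and the definition $\Lambda'(x) = (\lambda + B(x))/\tau(x)$:

```latex
F(x) = \tau(x)\, G(x)\, e^{\Lambda(x)}
\quad\Longrightarrow\quad
F'(x) = e^{\Lambda(x)} \Big[ (\tau G)'(x) + \Lambda'(x)\,\tau(x)\,G(x) \Big]
\\
\phantom{F'(x)}
      = e^{\Lambda(x)} \Big[ L_+ G(x) - (\lambda + B(x))\, G(x) + (\lambda + B(x))\, G(x) \Big]
      = e^{\Lambda(x)}\, L_+ G(x) \;\ge\; 0 .
```

The negative part of the fragmentation operator and the eigenvalue term cancel exactly against the exponential weight, leaving only the nonnegative gain term.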

Asymptotic estimates of G as x → 0
Proof of Theorem 1.8. Define $F(x) := \tau(x)\, G(x)\, e^{\Lambda(x)}$. We know from [2] that $F(x) \to 0$ when $x \to 0$ and, more precisely, that $F(x) \le C x^{\mu}$. The derivative of $F$, as noted in (33), is $F'(x) = e^{\Lambda(x)}\, L_+ G(x)$.

Case $\alpha_0 < 1$. In this case, $\Lambda(x) \to \Lambda(0) < 0$ as $x \to 0$. Choose $\varepsilon > 0$ such that $p$ is a function on $[0, \varepsilon)$ (the fact that this can be done for small enough $\varepsilon$ is implicit in Hypothesis 1.4), and call $p_* = p\, \mathbf{1}_{[0, \varepsilon]}$. Then, from Hypothesis 1.4, (34) holds, with the convergence being pointwise in $y$. We may additionally choose $\varepsilon \in (0, 1)$ and $C > 0$ such that (35) holds. Now we split $F'$ into two terms. For the first term in the r.h.s., we use that $B(y) \sim B_0\, y^{\gamma_0}$ as $y \to 0$ and $G(y) \le C y^{\mu - \alpha_0}$ (see [2]) to conclude that it tends to zero when $x \to 0$, since $\gamma_0 + 1 - \alpha_0 > 0$. For the second term, we use (34) and (35) to obtain, by dominated convergence, a limit which is strictly positive and finite, since $G(y) \le C y^{\mu - \alpha_0}$ and $\gamma_0 - \alpha_0 > -1$.
Finally, we have deduced that there is a positive constant $C > 0$ such that the claimed behavior of $G$ holds.

Case $\alpha_0 \ge 1$. In this case we necessarily have $\gamma_0 > 0$ and, as a consequence, following a similar reasoning as for the previous case, we obtain the claimed limit due to l'Hôpital's rule. This finally gives $G(x) \underset{x \to 0}{\sim} C\, x^{\mu - 1}$.
3 Estimates of the dual eigenfunction φ

3.1 Asymptotic estimates of φ as x → 0

We first give the proof of Theorem 1.10, which is rather direct.

Proof of Theorem 1.10. Define $\psi(x) := \varphi(x)\, e^{-\Lambda(x)}$. This function is decreasing since it satisfies
$$\psi'(x) = -\frac{e^{-\Lambda(x)}}{\tau(x)} \int_0^x b(x, y)\, \varphi(y)\, \mathrm{d}y \;\le\; 0.$$
Moreover it is a positive function, so to prove Theorem 1.10 we only have to prove that $\psi$ is bounded at $x = 0$. Consider, for $\eta > 0$, $\tau_\eta$ as defined in the approximation procedure (see (49) in Appendix A), and denote by $\varphi_\eta$, $\Lambda_\eta$ and $\psi_\eta$ the corresponding functions. First, we know from [2] that $\varphi_\eta$ converges locally uniformly to $\varphi$ when $\eta \to 0$. We have, for $\eta > 0$, that
$$-\Lambda_\eta(x) = \int_x^1 \frac{\lambda + B(y)}{\tau_\eta(y)}\, \mathrm{d}y$$
is bounded at $x = 0$, and this is why it is useful to consider this regularization. We have, for any $x_0 > 0$, a corresponding integral bound on $\psi_\eta$. Now, because $B/\tau$ is integrable at $x = 0$, we can choose $x_0 > 0$ such that
$$\pi_0 \int_0^{x_0} \frac{B(y)}{\tau(y)}\, \mathrm{d}y = \rho < 1,$$
and we obtain that $\psi_\eta$ is uniformly bounded when $\eta \to 0$; thus the limit $\psi(x)$ is bounded.
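One plausible way the uniform bound on $\psi_\eta$ closes is sketched below. It assumes $\psi_\eta = \varphi_\eta\, e^{-\Lambda_\eta}$, the identity $\int_0^s b(s, y)\,\mathrm{d}y = \pi_0\, B(s)$ from Hypothesis 1.1, $\tau_\eta \ge \tau$, and that $\Lambda_\eta$ is increasing:

```latex
\psi_\eta(x)
= \psi_\eta(x_0) + \int_x^{x_0} \frac{e^{-\Lambda_\eta(s)}}{\tau_\eta(s)} \int_0^s b(s, y)\, \varphi_\eta(y)\, \mathrm{d}y\, \mathrm{d}s
\;\le\; \psi_\eta(x_0) + \pi_0 \Big( \sup_{(0, x_0]} \psi_\eta \Big) \int_0^{x_0} \frac{B(s)}{\tau(s)}\, \mathrm{d}s ,
```

using $\varphi_\eta(y) = \psi_\eta(y)\, e^{\Lambda_\eta(y)} \le \big( \sup \psi_\eta \big)\, e^{\Lambda_\eta(s)}$ for $y \le s$. Taking the supremum over $x \in (0, x_0]$ then gives $\sup_{(0, x_0]} \psi_\eta \le \psi_\eta(x_0)/(1 - \rho)$, which is the claimed uniform bound.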

A maximum principle
For finding the bounds on the dual eigenfunction as $x \to +\infty$ we use comparison arguments, valid for each truncated problem on $[0, L]$ (see Appendix A for details on the truncation). Then we pass to the limit, as the bounds we obtain are independent of $L$. The function $\varphi_L(x)$ satisfies an equation expressed through an operator $S$, defined for all functions $w \in W^{1,\infty}(0, L)$ and for $x \in (0, L)$. This operator satisfies the identity (36), where $G_L$ is the eigenfunction of the truncated growth-fragmentation operator. We recall the concept of supersolution. Maximum principles are a powerful tool for proving the existence of sub- and supersolutions for growth-fragmentation models, as in [7,2]. For our case, we recall the maximum principle given in [2].

Proof. We start from the fact that $w$ is a supersolution on $(A, L)$. Testing this inequality against $\mathbf{1}_{w \le 0}$, we obtain a corresponding inequality on $(A, L)$. Extend $f$ by zero on $[0, A]$. Since $w_-(x) = 0$ on $[0, A]$ by assumption, the latter inequality holds true on $(0, L)$. Testing this last inequality against $G_L$ and using (36), we see that, because $f$ and $G_L$ are positive on $(A, L)$, this is possible only if $\mathbf{1}_{w \le 0} \equiv 0$ on $(A, L)$, and this ends the proof.

Asymptotic estimates of φ as x → +∞
Now we prove the results concerning the asymptotic behavior of $\varphi(x)$ when $x \to +\infty$, Theorem 1.9. For these results, we still assume that Hypotheses 1.1-1.4 are satisfied and, in the case $\gamma < 0$, we additionally assume that $\mu = 1$ and $p_0 > 0$ (so that $p(z) \to p_0 > 0$ as $z \to 0$). We recall that Hypothesis 1.3 says that $B(x)$ behaves like a $\gamma$-power of $x$ and $\tau(x)$ like an $\alpha$-power of $x$, with $\gamma + 1 - \alpha > 0$.
Proof of Theorem 1.9. The proof is divided into two cases, and each case is proved in two steps. In the first step we exhibit particular supersolutions and prove the upper bound, and in the second one we prove the corresponding lower bounds.
Now we prove that there exists $C > 0$ such that the upper bound holds. First, we can choose $C$ such that $v(x) = Cx + 1 - x^k$ is bounded below by a positive constant. Moreover, we take an approximation $\varphi_L$ of $\varphi$ such that $\varphi_L(L) = 0$. Then, choosing $K > 0$ large enough, we have that $K v(x) > \varphi(x)$ on $[0, A]$, because $\varphi_L$ is bounded uniformly in $L$ on $[0, A]$, and $K v(L) = KCL + K - KL^k > 0$ for $L$ large enough. So, using the maximum principle and the previous lemma, we obtain the upper bound. Step 2: Lower bounds. For the lower bounds we first prove that $v(x) := x + x^k - 1$ is a subsolution for $\max(0, 1 - \gamma) < k < 1$. The idea is to use $x^k$ to transform $x$, which is a supersolution, into a subsolution.
For γ > 0, there exists C > 0 such that We know that φ is positive, so for C small enough, Moreover, taking an approximation φ L of φ such that φ L (L) = L, we have Cv(L) − φ(L) < 0 for C < 1 and L large enough. Finally we use the lemmas on the maximum principle and the subsolution to conclude that there exists C > 0 such that and the result follows.
Step 1: Upper bounds. We start by proving the claim for any $\eta > -\gamma\lambda$; to estimate the last term in the r.h.s. we proceed similarly as in the proof of Theorem 1.8. We write, for $\varepsilon \in (0, 1)$, the analogous splitting. Then, choosing $\varepsilon$ such that (35) is satisfied (for this we use Hypothesis 1.4), we obtain by dominated convergence from (34) the corresponding limit. On the other hand, since $\gamma < 0$, we obtain the desired inequality. Now, we claim that there exist $C > 0$ and $\varepsilon > 0$ such that the upper bound holds. The proof of this fact follows from the maximum principle, taking an approximation $\varphi_L$ of $\varphi$ such that $\varphi_L(L) = 0$ and using that $v(x)$ is a supersolution.
Step 2: Lower bounds. For the lower bounds we define a suitable candidate $v$. Then, for $\varepsilon < \lambda\gamma(\gamma - 1)$, and reasoning as in Step 1, we obtain the corresponding estimate. Finally, there exist $C > 0$ and $\varepsilon > 0$ such that the lower bound holds. Again, choosing an approximation $\varphi_L$ of $\varphi$ such that $\varphi_L(L) = L$, the proof uses the maximum principle and the fact that $v(x)$ is a subsolution.

Entropy dissipation inequality
As was seen in [8,9,5,1], the general relative entropy principle applies to solutions of (1). We recall that we use the entropy $H[g|G]$ defined in (21), with dissipation $D[g|G]$ given by (22). For the proof of the entropy inequality we will use [1, Lemma 2.2] with $\zeta(x) \equiv 1$.
We need to check its hypotheses.
Proof. The bound (37) on $G$ is true because of Theorem 1.7 and Theorem 1.8 with $\mu = 1$. The second bound (38) follows, due to l'Hôpital's rule, using Theorem 1.7. Finally, (39) is a consequence of Theorem 1.9.
Let $G$ and $\varphi$ be the stationary profiles for problems (3) and (4). Then we can choose constants $K, M > 0$ and $R > 1$ such that the profiles $\varphi$ and $G$ satisfy (40)-(42). Proof.
At this point, we have all the tools to prove the entropy-entropy dissipation inequality.
Proof of Theorem 1.11. From [1, Lemma 2.1] one can rewrite the entropy in terms of $D_2$. If one looks at the integrands, one realizes that $D$ and $D_2$ both have $\varphi(x)$ and $G(y)$ as common terms. So we would like to compare them and check that (45) holds. We will denote by $C$ any constant depending on $G$, $\varphi$, $K$, $M$, or $R$, but not on $g$. We now distinguish two cases.
Case $\gamma < 0$. The relation (45) is satisfied due to (42). So we can compare pointwise the integrands of $D_2[g|G]$ and $D[g|G]$, and the inequality (23) holds.
Case $\gamma > 0$. For the case $\gamma > 0$ we follow the same argument as in [1, Theorem 2.4]. We start by rewriting $D_2[g|G]$ as a sum over the sets $A_1 = \{y > x,\ y \le RM \text{ or } y < Rx\}$ and $A_2 = A_1^c$. For the first term, thanks to (40), one has (46). For the other term we obtain (47), where in the first inequality we applied [1, Lemma 2.2] with the bounds given in Lemma 4.2, and for the second one we used (41). The proof concludes by gathering (46) and (47).

A Approximation procedures
To prove the estimates on the dual eigenfunction $\varphi$, we use a truncated problem. More precisely, we use alternatively one of the following ones, which differ only in their boundary condition. The following lemma ensures that these truncations converge to the correct limit when $L \to +\infty$.
Lemma A.1. There exists $L_0 > 0$ such that for each $L \ge L_0$ the problem (48) has a unique solution $(\lambda_L, \varphi_L)$ with $\lambda_L > 0$ and $\varphi_L \in W^{1,\infty}_{\mathrm{loc}}(\mathbb{R}_+)$. Moreover, $\lambda_L \to \lambda$ and $\varphi_L \to \varphi$ locally uniformly as $L \to +\infty$.

Proof. We start with the case $\varphi_L(L) = 0$, following the method in [2]. Define, for $\eta > 0$, the regularized growth rate $\tau_\eta$ as in (49). Then consider, for $\varepsilon > 0$ and $L > 0$, the truncated (and regularized) eigenvalue problem (50) on $[0, L]$. Notice that in this problem the eigenelements $(\lambda_L, G_L, \varphi_L)$ depend on $\eta$ and $\varepsilon$, and should be denoted $(\lambda_L^{\eta,\varepsilon}, G_L^{\eta,\varepsilon}, \varphi_L^{\eta,\varepsilon})$. We drop the superscripts here for the sake of clarity.
The existence of a solution to problem (50) is proved in the Appendix of [2] by using the Krein-Rutman theorem. Then we need to pass to the limit $\eta, \varepsilon \to 0$. The uniform estimates in [2] allow us to do so, provided that $\lambda_L^{\eta,\varepsilon}$ is positive for all $\eta, \varepsilon$. In [2] this condition is ensured for $L$ large enough under the constraint that $\varepsilon L$ is a fixed constant, which means that $L = L(\varepsilon)$ tends to $+\infty$ as $\varepsilon \to 0$.
Here we want to pass to the limit $\varepsilon \to 0$ for a fixed positive value of $L$. For this we prove the existence of a constant $L_0 > 0$ such that $\lambda_L^{\eta,\varepsilon} > 0$ for all $\eta, \varepsilon \ge 0$ and all $L \ge L_0$.
Assume by contradiction that $\lambda_L \le 0$. Then, by integration of the direct eigenequation between 0 and $x < L$, we obtain a lower bound. We assume that $b(y, x) = \frac{B(y)}{y}\, p\!\left( \frac{x}{y} \right)$ with $\int_0^1 p(h)\, \mathrm{d}h = \pi_0 > 1$. Thus, for $p$ bounded, there exists $s \in (0, 1)$ such that

B(y)G(y) dy
and finally, by integration on $[sL, L]$, we obtain (51). We have from Hypothesis 1.3 an estimate which contradicts (51). Finally, $\lambda_L > 0$ for all $L \ge L_0 := A/s$.
We have proved the existence of a solution for problem (48) in the case $\varphi_L(L) = 0$. Now we use this result to treat the cases $\varphi_L(L) = \delta > 0$ and $\varphi_L(L) = \delta L$. Since $\delta > 0$, we can prove, by using the Krein-Rutman theorem, the existence of a solution to problem (48). To prove the convergence of $\lambda_L$ to $\lambda$, we integrate the equation on $\varphi_L$ multiplied by $G$. We know from the estimates on $G$ that $\tau(L)\, G(L)\, L \to 0$ when $L \to +\infty$, which ensures the convergence of $\lambda_L$. Because $\lambda > 0$, it also ensures the existence of $L_0$ such that $\lambda_L > 0$ for $L \ge L_0$, which allows us to prove the convergence of $\varphi_L$ to $\varphi$ locally uniformly (see [2] for details).

B Laplace's method
Laplace's method (see [13, II.1, Theorem 1] for example) gives a way to calculate the asymptotic behavior of integrals which contain an exponential term with a large factor in the exponent. We give here a result of this kind, with conditions which are adapted to the situation encountered in Section 2.
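For orientation, the model statement behind this kind of result is the classical Laplace/Watson asymptotic. In the simplest case of a $D$-independent phase $h(x) = h_0 x^{\omega}$ and an integrand $g(x) \sim g_0\, x^{\sigma}$ as $x \to 0$ (with $h_0, \omega > 0$ and $\sigma > -1$; this is the textbook version, not the exact hypotheses of Lemma B.1), the substitution $t = D h_0 x^{\omega}$ gives:

```latex
\int_0^{a} e^{-D\, h_0\, x^{\omega}}\, g(x)\, \mathrm{d}x
\;\underset{D \to +\infty}{\sim}\;
\frac{g_0}{\omega\, (h_0 D)^{\frac{\sigma+1}{\omega}}}\;
\Gamma\!\left( \frac{\sigma + 1}{\omega} \right).
```

Lemma B.1 below generalizes this to $D$-dependent phases $h(x, D)$ and to measures $g$, under conditions (52)-(55).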
Some remarks on the conventions used above are in order. Although g is a measure we denote it as a function in the expressions in which it appears. For example, integrals in which g appears should be considered as integrals with respect to the measure g. Also, in equation (52), it is understood that close to x 0 the measure g is equal to a function, and the asymptotic approximation (52) holds.
Proof. First of all, by translating $g$ and $h$ we may always assume that $x_0 = 0$. We may also assume that $h(x_0, D) = 0$ for all $D \ge D_0$, as otherwise one obviously obtains the additional factor $e^{-D h(x_0, D)}$.
An important part of the argument is based on the observation that if one excludes a small region close to 0, then the rest of the integral decreases fast as $D \to +\infty$: from (53) and (55) we deduce that (57) holds for some $\rho > 0$. Then, for $D \ge D_0$ and $0 < \varepsilon < 1$, we have from (57) a bound, due to (54). If we take $\varepsilon := D^{-\frac{1}{2\omega}}$, then for all $D > D_0$ we obtain a quantity that decreases faster than any power of $D$ as $D \to +\infty$.
For the remaining part of the integral, since we are integrating over a region closer and closer to 0, it is easy to see, due to (52) and (53), that for all $\varepsilon > 0$ there exists $D_\varepsilon > 0$ such that (58) holds for all $D > D_\varepsilon$. Through the change of variables $z = x D^{1/\omega}$ we obtain the stated asymptotics, where the '∼' sign denotes asymptotics as $D \to +\infty$. Carrying out a similar calculation for the last integral in (58), and letting $\varepsilon \to 0$, we deduce our result.
This shows the result.
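The Laplace-type asymptotic can be checked numerically. The sketch below (with the illustrative choices $h_0 = 1$, $g(x) = x^{\sigma}$, a $D$-independent phase $x^{\omega}$, and our own quadrature) compares a brute-force integral against the predicted leading term $\Gamma\!\big(\frac{\sigma+1}{\omega}\big) \big/ \big(\omega\, D^{(\sigma+1)/\omega}\big)$:

```python
import numpy as np
from math import gamma

# Numerical check of the Laplace/Watson asymptotic
#   int_0^a exp(-D * x**omega) * x**sigma dx
#     ~  Gamma((sigma+1)/omega) / (omega * D**((sigma+1)/omega))   as D -> +inf.
def integral(D, sigma, omega, a=0.5, n=200_000):
    x = np.linspace(0.0, a, n + 1)
    f = np.exp(-D * x ** omega) * x ** sigma   # vanishes at x = 0 for sigma > 0
    h = a / n
    return (f.sum() - 0.5 * (f[0] + f[-1])) * h  # composite trapezoidal rule

def predicted(D, sigma, omega):
    return gamma((sigma + 1) / omega) / (omega * D ** ((sigma + 1) / omega))

for D in (50.0, 200.0, 800.0):
    ratio = integral(D, 0.5, 1.0) / predicted(D, 0.5, 1.0)
    assert abs(ratio - 1) < 1e-2
```

The agreement improves as $D$ grows, since the contribution away from the endpoint decays faster than any power of $D$, as in the proof above.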
We now use this to prove an estimate which is needed in Section 2:

Lemma B.3. Assume Hypotheses 1.1-1.4 with $p_1 > 0$, and take $k \in \mathbb{R}$. Then (60) holds. If we also assume Hypothesis 1.5 and $\nu = 0$ (and now we allow any $p_1 \ge 0$), then there is $\varepsilon > 0$ such that (61) holds.

Proof. We call $p_*$ and $p^*$, respectively, the parts of the measure $p$ on the intervals $[0, 1/2)$ and $[1/2, 1]$, i.e., $p_* := p\, \mathbf{1}_{[0, 1/2)}$ and $p^* := p\, \mathbf{1}_{[1/2, 1]}$. With this we break the integral we want to estimate into two parts:
$$I(x) := \int_{x_0}^{x} e^{\Lambda(y)}\, y^k\, p\!\left( \frac{y}{x} \right) \mathrm{d}y
= \int_{x_0}^{x} e^{\Lambda(y)}\, y^k\, p_*\!\left( \frac{y}{x} \right) \mathrm{d}y + \int_{x_0}^{x} e^{\Lambda(y)}\, y^k\, p^*\!\left( \frac{y}{x} \right) \mathrm{d}y =: I_*(x) + I^*(x).$$
The first part, $I_*$, can be estimated by
$$I_*(x) = \int_{x_0}^{x} e^{\Lambda(y)}\, y^k\, p_*\!\left( \frac{y}{x} \right) \mathrm{d}y
\le e^{\Lambda(x/2)} \int_{x_0}^{x} y^k\, p_*\!\left( \frac{y}{x} \right) \mathrm{d}y
\le e^{\Lambda(x/2)}\, \max\{x^k, x_0^k\} \int_{x_0}^{x} p_*\!\left( \frac{y}{x} \right) \mathrm{d}y
\le \pi_0\, x\, e^{\Lambda(x/2)}\, \max\{x^k, x_0^k\}.$$
Since we will show that $I^*(x)$ behaves as given in the statement, this term is of lower order (since $\Lambda(x)$ is asymptotic to a positive power of $x$ as $x \to +\infty$) and can be disregarded. For $I^*$ we make the change of variables $z = y/x$ and denote $D := x^{\gamma_+ - \alpha + 1}$, obtaining an integral of the form studied in Lemma B.1. The property (52) is satisfied with $g_0 = p_1$ and $\sigma = \nu$ due to Hypothesis 1.4, and to show (53) we write (with asymptotics notation understood to be for $z \to 1$ and $D \to +\infty$) the expansion corresponding to $h_0 = \zeta$, $\omega = 1$ in Lemma B.1. For (54) we bound
$$\int_{\max\{x_0/x,\, 1/2\}}^{1} \exp\big( -D_0\, h(z, D) \big)\, g(z)\, \mathrm{d}z \le C_0$$
for some $C_0 > 0$ (which in particular depends on $k$), since $x \mapsto \Lambda(x)/D = \Lambda(x)/x^{\gamma_+ - \alpha + 1}$ is bounded for $x > 1$. This gives (54). Obviously $z \mapsto h(z, D)$ attains its minimum at $z = 1$, and (55) is a consequence of (63) and the fact that $h(z, D) - h(1, D)$ is decreasing in $z$ for all $D$.
We may then apply Lemma B.1 to obtain the stated asymptotics. Since $I_*(x)$ was shown to be of lower order, this is enough to show (60).
Finally, in order to show (61), we write
$$\int_{x_0}^{x} e^{\Lambda(y)}\, y^k\, p\!\left( \frac{y}{x} \right) \mathrm{d}y
= \int_{x_0}^{x} e^{\Lambda(y)}\, y^k \left( p\!\left( \frac{y}{x} \right) - p_1 \right) \mathrm{d}y
+ p_1 \int_{x_0}^{x} e^{\Lambda(y)}\, y^k\, \mathrm{d}y.$$