A simple proof of the Seneta-Heyde norming for branching random walks under optimal assumptions

We introduce a set of tools which simplify and streamline the proofs of limit theorems concerning near-critical particles in branching random walks under optimal assumptions. We exemplify our method by giving another proof of the Seneta-Heyde norming for the critical additive martingale, initially due to Aïdékon and Shi. The method involves in particular the replacement of (truncated) second moments by truncated first moments, and the replacement of ballot-type theorems for random walks by estimates coming from an explicit expression for the potential kernel of random walks killed below the origin. Of independent interest might be a short, self-contained proof of this expression, as well as a criterion for convergence in probability of non-negative random variables in terms of conditional Laplace transforms.


Introduction
In the theory of branching processes, many limit theorems hold under so-called $L\log L$-type moment conditions which are both sufficient and necessary. The most famous, and historically the first, is the Kesten-Stigum theorem [KS66], which states in particular that a supercritical Galton-Watson process $(Z_n)_{n\ge0}$ grows asymptotically like $W m^n$ as $n \to \infty$, where $m$ is the mean of the offspring distribution and $W$ is a random variable which is non-degenerate if and only if $E[L\log L] < \infty$, where $L$ is a random variable equal in law to the number of offspring of an individual. In fact, $W$ is the limit of the martingale $W_n = m^{-n} Z_n$ and another statement of the theorem says that the martingale $(W_n)_{n\ge0}$ is uniformly integrable if and only if $E[L\log L] < \infty$.
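The Kesten-Stigum normalization is easy to observe numerically. The following sketch is our own illustration (not from the paper); the offspring law $p = (1/4, 1/4, 1/2)$ on $\{0,1,2\}$ with mean $m = 5/4$ is an arbitrary supercritical choice with bounded offspring, so $E[L\log L] < \infty$ trivially holds. It tracks the martingale $W_n = m^{-n} Z_n$ along one realization.

```python
import random

# Sketch of the Kesten-Stigum normalization (our own illustration).
# Assumed offspring law p on {0, 1, 2} with mean m = 1.25 > 1 (supercritical)
# and E[L log L] < infinity, so W_n = Z_n / m**n converges to a
# non-degenerate limit on the event of survival.
def gw_normed(n_gens, p=(0.25, 0.25, 0.5), seed=1):
    """Return the normed population sizes W_1, ..., W_{n_gens} of one run."""
    rng = random.Random(seed)
    m = sum(k * pk for k, pk in enumerate(p))   # mean offspring number
    z, out = 1, []
    for n in range(1, n_gens + 1):
        z = sum(rng.choices(range(len(p)), weights=p)[0] for _ in range(z))
        out.append(z / m ** n)
    return out

w = gw_normed(20)
```

Plotting `w` over several independent runs shows the trajectories flattening out to distinct random limits, as the theorem predicts.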
In the context of branching random walks, Lyons [Lyo97] has shown an analogous theorem for the so-called additive martingales arising naturally in this context. His theorem pertains mostly to those additive martingales whose parameter is in the so-called subcritical, or, using statistical physics terminology, high-temperature regime. The martingales in this regime describe the asymptotic growth of the particles in the bulk, i.e. in regions where the number of particles grows exponentially with time. In contrast, the last decade has seen considerable interest in the extremal particles in branching random walks, as well as in related models such as the two-dimensional Gaussian Free Field, Gaussian multiplicative chaos and characteristic polynomials of certain random matrices, see e.g. [Shi15,Zei16,RV14,Bov17] for fairly recent reviews. In these models and in the branching random walk in particular, it is well-known that the asymptotics of the extremal or near-extremal particles are strongly related to the so-called derivative martingale, which is the derivative of the additive martingale with respect to its parameter at its critical value. It is therefore natural to ask for sufficient and necessary L log L-type conditions for the convergence of the derivative martingale to a non-degenerate limit. Such a condition, together with proof of sufficiency, has been given by Aïdékon [Aïd13], with necessity subsequently established by Chen [Che15]. We will refer to it as Aïdékon's condition.
Aïdékon's condition arises generically in limit theorems concerning critical or near-critical particles in branching random walk. A prime example is the convergence in law of the recentered minimum [Aïd13]. Another important example is the so-called Seneta-Heyde norming of the additive martingale at critical parameter: it has been shown by Aïdékon and Shi [AS14] that this martingale, properly renormalized, converges in probability to the same limit as the derivative martingale, under Aïdékon's condition. Their proof has been adapted by He, Liu and Zhang [HLZ16] to cases where a certain variance $\sigma^2$ (defined in Equation (2) below) is infinite, and by Aru, Powell and Sepúlveda [APS17] to an analogous result for Gaussian multiplicative chaos. The proofs of such limit theorems are often quite involved and technical. At their heart lies the so-called spine decomposition, introduced by Lyons, Pemantle and Peres [LPP95] for Galton-Watson processes and adapted by Lyons [Lyo97] to the branching random walk. However, in order to cope with the extremal or near-extremal particles, quite involved truncation techniques have been introduced. A self-contained treatment of these techniques appears in Shi [Shi15]. These include the following:
• second moment estimates for quantities restricted to a certain subset of the particles, together with first moment bounds on the remainder obtained by so-called peeling lemmas;
• ballot-type theorems for random walks conditioned to stay above certain space-time curves.
We emphasize that these techniques are not only quite technical, but also require the ad-hoc construction of certain quantities and sets of particles specifically tailored to the problem at hand. Other techniques using $L^p$ estimates can be used, but they require more restrictive assumptions, see e.g. Kyprianou and Madaule [KM15].
In the present article, we give a new proof of the Seneta-Heyde norming for the critical additive martingale in branching random walks, valid under optimal assumptions. We find this proof to be simpler and more streamlined than the original one by Aïdékon and Shi due to several technical improvements, amongst others:
• the second moment estimates and peeling lemmas are replaced by certain truncated first moment estimates;
• the use of ballot-type theorems is replaced by other, softer methods, in particular bounds on the potential kernel of random walks killed below the origin.
We believe that our methods not only make the proof simpler, but that they are also more versatile in that they can be used as a general toolbox for proving limit theorems involving extremal or near-extremal particles of the branching random walk under optimal assumptions. In fact, the present article is part of a program that aims to establish limit theorems for branching random walks under non-standard assumptions and the tools developed here will be of use later in the program.

Definitions and results
We consider discrete-time real-valued branching random walks (BRWs), which can be informally described as follows. At time $n = 0$, we start with one initial particle at the origin. Then, at each time step $n \ge 1$, every particle dies and gives birth to a random, possibly infinite number of particles distributed randomly on the real line. More precisely, the children of a particle at position $x \in \mathbb R$ are positioned at $x + X_1, x + X_2, \ldots$, where the vector $(X_1, X_2, \ldots)$ follows a given law $\Theta$, called the offspring distribution of the branching random walk. At each generation, the reproduction events are independent. Also, it is possible for several particles to share the same position. We further assume that the Galton-Watson process formed by the number of particles at each generation is super-critical, so that the system survives with positive probability. Formally, the branching random walk can be constructed as a stochastic process indexed by the Ulam-Harris tree $\mathcal U = \bigcup_{n\ge0} (\mathbb N^*)^n$, where $\mathbb N^* = \{1, 2, \ldots\}$. Particles are identified with vertices $u \in \mathcal U$, i.e. words over the alphabet $\mathbb N^*$. The length of the word $u$, i.e. the generation of the particle, is denoted by $|u|$. The position of the particle $u$ is denoted by $X_u$. If the particle indexed by $u$ does not exist, we set $X_u = +\infty$. The branching random walk described above then defines a process $(X_u)_{u\in\mathcal U}$ taking values in $\bar{\mathbb R} = \mathbb R \cup \{+\infty\}$, and the offspring distribution $\Theta$ is a probability distribution on $\bar{\mathbb R}^{\mathbb N^*}$. We further convene that mathematical expressions such as sums or products over the set $\{|u| = n\}$ of particles at generation $n$ are meant to ignore those $u$ for which $X_u = +\infty$.
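As a concrete illustration of this informal description, here is a minimal simulation sketch (our own, not from the paper). The assumed offspring law is binary: every particle has exactly two children, displaced by independent $N(\mu, s^2)$ variables; the choice $\mu = s^2 = 2\log 2$ is an assumption made so that the boundary-case conditions introduced just below hold for this example.

```python
import math, random

# Minimal simulation of a binary branching random walk (our own sketch).
# Assumed offspring law: every particle has exactly two children, displaced
# by independent N(mu, s2) variables; mu = s2 = 2*log(2) is chosen so that
# this example lies in the boundary case discussed below.
MU = S2 = 2 * math.log(2)

def brw(n_gens, max_particles=100_000, seed=0):
    """Return the list of particle positions, one list per generation."""
    rng = random.Random(seed)
    gens = [[0.0]]                          # one initial particle at the origin
    for _ in range(n_gens):
        cur = gens[-1][:max_particles]      # cap to keep memory bounded
        gens.append([x + rng.gauss(MU, math.sqrt(S2))
                     for x in cur for _ in range(2)])
    return gens

gens = brw(10)
```

With binary branching the number of particles at generation $n$ is exactly $2^n$ (until the cap kicks in), matching the super-critical assumption.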
As mentioned above, we assume that the branching is super-critical, i.e.
$$E\big[\#\{u : |u| = 1\}\big] > 1.$$
Furthermore, we work in the so-called boundary case, meaning we suppose that
$$E\Big[\sum_{|u|=1} e^{-X_u}\Big] = 1 \qquad\text{and}\qquad E\Big[\sum_{|u|=1} X_u\, e^{-X_u}\Big] = 0. \tag{1}$$
The second equality in (1) implicitly assumes that the expectation is well-defined, which is automatically the case under the next assumption:
$$\sigma^2 := E\Big[\sum_{|u|=1} X_u^2\, e^{-X_u}\Big] \in (0, \infty). \tag{2}$$
Note that $\sigma^2 < \infty$ holds for example if $E\big[\sum_{|u|=1} e^{-\theta X_u}\big] < \infty$ for $\theta$ in a neighborhood of 1, and $\sigma^2 > 0$ holds as soon as the $X_u$, $|u| = 1$, are not all equal to 0 or $+\infty$, almost surely.
It is a well-known consequence of (1) and the branching property that the processes $(W_n)_{n\ge0}$ and $(D_n)_{n\ge0}$, defined by
$$W_n = \sum_{|u|=n} e^{-X_u} \qquad\text{and}\qquad D_n = \sum_{|u|=n} X_u\, e^{-X_u},$$
are martingales with respect to the canonical filtration $(\mathcal F_n)_{n\ge0}$ of the BRW, defined by $\mathcal F_n = \sigma(X_u, |u| \le n)$, see for example [BK04]. We will refer to $(W_n)_{n\ge0}$ as the additive martingale or Biggins' martingale, in reference to Biggins [Big77], and to $(D_n)_{n\ge0}$ as the derivative martingale. The second equality in (1) implies that $W_n$ converges almost surely to 0 [Lyo97]. In particular, we have $\min_{|u|=n} X_u \to \infty$ a.s. as $n \to \infty$. As for the derivative martingale, under assumptions (1) and (2), Biggins and Kyprianou [BK04] showed that $D_n$ converges a.s. to a finite non-negative limit $D_\infty$. We introduce the following moment conditions:
$$E\big[W_1 (\log_+ W_1)^2\big] < \infty \tag{3}$$
$$E\big[Z_1 \log_+ Z_1\big] < \infty, \qquad\text{where } Z_1 = \sum_{|u|=1} X_u^+\, e^{-X_u}. \tag{4}$$
Here, we use the notations $\log_+ x = \max(\log x, 0)$, $x^+ = \max(x, 0)$ and $x \wedge y = \min(x, y)$. Under the additional assumptions (3) and (4), Aïdékon [Aïd13] proved that $D_\infty > 0$ a.s. on the event of survival of the branching random walk. Later, Chen [Che15] showed the converse result, in the sense that if (1) and (2) hold, then the limit is non-trivial if and only if conditions (3) and (4) hold.
As mentioned in the introduction, the main result of this paper is a new proof of the following result by Aïdékon and Shi [AS14]. We believe this proof to be simpler and more streamlined, and the tools established in its proof will be useful in other settings.
Theorem 1 (Aïdékon, Shi [AS14]). Assume (1), (2), (3) and (4) hold. We have
$$\sqrt{n}\, W_n \xrightarrow[n\to\infty]{} \sqrt{\frac{2}{\pi\sigma^2}}\, D_\infty \quad\text{in probability}.$$
The remainder of the article is organized as follows. Section 2 contains some preliminaries, namely the spinal decomposition and the many-to-one formula (Section 2.1) and some properties of a certain renewal function associated to this decomposition (Section 2.2). Section 3 contains the proof of Theorem 1, as well as a comparison with previous proofs of the same result. In particular, we present in this section the new technical tools going into our proof. The proof of one key ingredient (Proposition 6) is deferred to Section 4, where most of the work is done. Appendix A contains a formula for the potential kernel of random walks on a half-line in terms of associated renewal measures. Appendix B contains a criterion for convergence in probability of non-negative random variables using Laplace transforms. Appendix C contains a certain Tauberian-type lemma involving truncated first moments of a non-negative random variable.
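As a quick numerical illustration of Theorem 1 (our own sketch, not part of the proof): for the assumed binary BRW with i.i.d. Gaussian displacements $N(\mu, s^2)$ and $\mu = s^2 = 2\log 2$, conditions (1) and (2) hold with $\sigma^2 = 2\log 2$, and one can compare $\sqrt{n}\,W_n$ with $\sqrt{2/(\pi\sigma^2)}\,D_n$ at a finite generation. Agreement is only approximate at finite $n$ and fluctuates across realizations.

```python
import math, random

# Numerical sketch of Theorem 1 (our own illustration, not part of the proof).
# Assumed model: binary BRW with i.i.d. N(mu, s2) displacements and
# mu = s2 = 2*log(2), which puts the walk in the boundary case with
# sigma^2 = 2*log(2).
MU = S2 = 2 * math.log(2)

def seneta_heyde_pair(n, seed=0):
    """Return (sqrt(n)*W_n, sqrt(2/(pi*sigma^2))*D_n) for one realization."""
    rng = random.Random(seed)
    pos = [0.0]
    for _ in range(n):
        pos = [x + rng.gauss(MU, math.sqrt(S2)) for x in pos for _ in range(2)]
    w_n = sum(math.exp(-x) for x in pos)            # additive martingale W_n
    d_n = sum(x * math.exp(-x) for x in pos)        # derivative martingale D_n
    return math.sqrt(n) * w_n, math.sqrt(2 / (math.pi * S2)) * d_n

lhs, rhs = seneta_heyde_pair(14)
```

Averaging the ratio `lhs / rhs` over many seeds and increasing `n` brings it closer to 1, in line with the convergence in probability.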

The spinal decomposition
In this section, we recall a change of measure and an associated spinal decomposition of the BRW due to Lyons [Lyo97]. It will be helpful to allow the initial particle of the BRW to sit at an arbitrary position $x \in \mathbb R$; this will be denoted by adding the subscript $x$ as in $P_x$ and $E_x$ (if $x = 0$, the subscript is omitted). Then $(W_n)_{n\ge0}$ is still a non-negative martingale with $W_0 = e^{-x}$. Define $\mathcal F_\infty = \sigma\big(\bigcup_{n\ge0} \mathcal F_n\big)$. Using Kolmogorov's extension theorem, for every $x \in \mathbb R$, there exists a probability measure $P^*_x$ on $\mathcal F_\infty$ such that for every generation $n \ge 0$,
$$\frac{dP^*_x}{dP_x}\bigg|_{\mathcal F_n} = e^{x}\, W_n.$$
Following Lyons [Lyo97], we see $P^*_x$ as the projection to $\mathcal F_\infty$ of a probability (also denoted $P^*_x$) defined on a bigger probability space equipped with a so-called spine, a distinguished ray in the tree. We will denote the vertex on the spine at generation $n$ by $\xi_n$ and its position by $X_{\xi_n}$. The spinal BRW evolves as follows under $P^*_x$:
• Start at generation 0 with one particle $\xi_0$ at position $x$.
• At generation $n$, all particles except $\xi_n$ reproduce according to the point process $\Theta$, and $\xi_n$ reproduces according to the size-biased reproduction law $\Theta^*$ defined by $d\Theta^*\big((x_i)_i\big) = \big(\sum_i e^{-x_i}\big)\, d\Theta\big((x_i)_i\big)$, a probability measure by (1).
• The spine at generation $n + 1$ is chosen amongst the children $u$ of $\xi_n$ with probability proportional to $e^{-X_u}$.
The following many-to-one formula can be deduced from Lyons [Lyo97], see also Aïdékon [Aïd13].
Proposition 2 (Many-to-one formula). For any $x \ge 0$, $n \in \mathbb N = \{0, 1, \ldots\}$ and every uniformly bounded family $(H_n(u))_{u\in\mathcal U}$ of $\mathcal F_n$-measurable random variables, one has
$$E_x\Big[\sum_{|u|=n} e^{-X_u}\, H_n(u)\Big] = e^{-x}\, E^*_x\big[H_n(\xi_n)\big].$$
The spinal decomposition implies that the process $(X_{\xi_n})_{n\in\mathbb N}$ follows the law of a random walk under $P^*_x$ (whose increments do not depend on $x$). Furthermore, Proposition 2 together with assumptions (1) and (2) shows that this random walk is centered and has finite, positive variance:
$$E^*\big[X_{\xi_1}\big] = 0 \qquad\text{and}\qquad E^*\big[X_{\xi_1}^2\big] = \sigma^2 \in (0, \infty). \tag{8}$$
The many-to-one formula is a powerful tool which allows one to express many quantities of the branching random walk in terms of the random walk $(X_{\xi_n})$.
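To make the formula concrete, here is a Monte Carlo sanity check (our own illustration, under the assumed binary Gaussian model used earlier, $\mu = s^2 = 2\log 2$). Taking $x = 0$ and $H_n(u) = \mathbf 1_{\{X_u \ge 0\}}$, the left-hand side equals $P^*(X_{\xi_n} \ge 0)$; for this model the spine walk is a centered Gaussian walk, so this probability is exactly $1/2$ for every $n \ge 1$.

```python
import math, random

# Monte Carlo sanity check of the many-to-one formula (our own illustration).
# For the assumed binary Gaussian BRW (mu = s2 = 2*log(2)) and x = 0, taking
# H_n(u) = 1_{X_u >= 0} gives
#   E[ sum_{|u|=n} e^{-X_u} 1_{X_u >= 0} ] = P*(X_{xi_n} >= 0) = 1/2,
# since the spine walk is then a centered Gaussian random walk.
MU = S2 = 2 * math.log(2)

def many_to_one_lhs(n=6, reps=4000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        pos = [0.0]
        for _ in range(n):
            pos = [x + rng.gauss(MU, math.sqrt(S2)) for x in pos for _ in range(2)]
        total += sum(math.exp(-x) for x in pos if x >= 0)
    return total / reps

est = many_to_one_lhs()   # should be close to 0.5
```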

The renewal function R
Throughout the article, we denote by $R$ the renewal function associated to the strictly descending ladder heights of the random walk $(X_{\xi_n})_{n\ge0}$, as defined in Appendix A. Keep in mind that $R(0) = 1$ and that $R(x) = 0$ for all $x < 0$. The following lemma recalls some well-known results concerning the function $R$ and the probability that the random walk $(X_{\xi_n})_{n\ge0}$ stays non-negative, expressed in terms of $R$.

Lemma 3. There exist constants $\theta, \theta' \in (0, \infty)$ such that, for every $x \ge 0$,
$$\lim_{n\to\infty} \sqrt{n}\, P^*_x\Big(\min_{0\le k\le n} X_{\xi_k} \ge 0\Big) = \theta\, R(x) \qquad\text{and}\qquad \sup_{n\ge1} \sqrt{n}\, P^*_x\Big(\min_{0\le k\le n} X_{\xi_k} \ge 0\Big) \le \theta'\, R(x). \tag{10}$$
Moreover,
$$R(x) \sim c_R\, x \quad\text{as } x \to \infty, \qquad\text{with } c_R \in (0, \infty) \text{ satisfying } \theta\, c_R = \sqrt{2/(\pi\sigma^2)}. \tag{11}$$

The first part of Lemma 3 is due to Kozlov [Koz76]. The second part can be found for example in Aïdékon and Shi [AS14], where it is derived as a consequence of Feller's renewal theorem and Sparre Andersen's identities for random walks. A different approach, in the spirit of Madaule [Mad16], is to use an invariance principle for the Doob $R$-transform of the random walk killed below the origin, see Section 4 for a precise definition of this process. This allows one to identify the constant $\sqrt{2/\pi}$ as the expectation of $1/Z_1$, where $Z_1$ is a 3-dimensional Bessel process at time 1, starting from 0.
As a direct consequence of (11) in Lemma 3, there exists a positive constant $c_1$ such that
$$R(x) \le c_1 (1 + x) \quad\text{for all } x \ge 0. \tag{12}$$
The following lemma is an easy consequence of (12) and will be used in Section 4.
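The renewal function $R$ can also be approximated numerically. The sketch below is our own illustration; a standard Gaussian walk is an assumed stand-in for the spine walk. It estimates $R(x)$ as the expected number of strictly descending ladder points with absolute value at most $x$, truncating each path at a fixed number of steps (which slightly undercounts for large $x$).

```python
import random

# Monte Carlo sketch of the renewal function R (our own illustration).
# A standard Gaussian walk stands in for the spine walk; R(x) is estimated
# as the expected number of strictly descending ladder points |H_n| <= x.
# Paths are truncated at a fixed number of steps, which slightly
# undercounts R(x) for large x.
def renewal_R(x, steps=2000, reps=400, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(reps):
        s, record, count = 0.0, 0.0, 1     # H_0 = 0 always lies in [0, x]
        for _ in range(steps):
            s += rng.gauss(0.0, 1.0)
            if s < record:                 # new strictly descending ladder point
                record = s
                if -record <= x:
                    count += 1
        total += count
    return total / reps

r0 = renewal_R(0.0, steps=200, reps=50)    # R(0) = 1 exactly
```

On this example, `renewal_R(x)` grows roughly linearly in `x`, in line with the linear growth of $R$ stated in Lemma 3.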
Lemma 4. For all $x, y \in \mathbb R$, we have
$$R(x + y) \le c_1 (1 + x^+)(1 + y^+).$$

Outline of the proof of Theorem 1

We define the following quantities, for $n, k_0 \ge 0$:
$$W'_n = \sum_{|u|=n} e^{-X_u}\, \mathbf 1_{\{\min_{0\le k\le n} X_{u_k} \ge 0\}}, \tag{13}$$
$$W''_{n,k_0} = \sum_{|u|=n} e^{-X_u}\, \mathbf 1_{\{\min_{k_0\le k\le n} X_{u_k} \ge 0\}}, \tag{14}$$
where $u_k$ denotes the ancestor of $u$ at generation $k$. In other words, $W'_n$ and $W''_{n,k_0}$ are obtained from $W_n$ by removing from the sum the contribution of the particles going below the origin at some time $k \le n$ or $k_0 \le k \le n$, respectively.
Remember that $\min_{|u|=n} X_u \to \infty$ almost surely on the event of survival. Thus, almost surely, for all $k_0$ large enough (depending on the realization),
$$W''_{n,k_0} = W_n \quad\text{for every } n \ge k_0. \tag{15}$$

Proposition 5. Let $x \ge 0$. Then, as $n \to \infty$,
$$\sqrt{n}\, E_x\big[W'_n\big] \longrightarrow \theta\, R(x)\, e^{-x},$$
and for all $n \ge 0$,
$$\sqrt{n}\, E_x\big[W'_n\big] \le \theta'\, R(x)\, e^{-x},$$
where $\theta$ and $\theta'$ are the constants from Lemma 3.
Proof. By the many-to-one formula (Proposition 2), we have:
$$E_x\big[W'_n\big] = e^{-x}\, P^*_x\Big(\min_{0\le k\le n} X_{\xi_k} \ge 0\Big).$$
Using Lemma 3 ends the proof.
In addition to the first moment estimate, we will need to show that the quantity √ nW ′ n concentrates sufficiently well around its expectation. Typically, one calculates (truncated) second moments for that. One key novel idea from this article, which greatly simplifies the calculations, is to replace this by estimates of a truncated first moment and, subsequently, of a conditional Laplace transform. In particular, this avoids the use of a peeling lemma such as Theorem 5.14 in [Shi15].
Proposition 6. For every $\varepsilon > 0$, there exists a positive function $h$, such that $\frac{h(x)}{R(x)} \to 0$ as $x \to \infty$ and such that the following holds: for every $x \ge 0$, we have
$$\limsup_{n\to\infty} \sqrt{n}\, E_x\big[W'_n\, \mathbf 1_{\{\sqrt{n}\, W'_n > \varepsilon\}}\big] \le h(x)\, e^{-x}.$$
Proposition 6 will be proven in the next section. Its proof relies on the spinal decomposition from Section 2.1, as well as on two ingredients whose use we believe to be new in this context:
• a lemma by Kersting and Vatutin [KV17] on the convergence of functionals of random walks conditioned to stay above the origin until a finite time $n$ to analogous functionals of random walks conditioned to stay above the origin for all time;
• an explicit formula for the potential kernel of random walks killed below the origin.
We now have all the tools we need in order to prove Theorem 1.
Proof of Theorem 1. We start by giving the outline of the proof. We first show that for any $\lambda > 0$,
$$E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0}\big)\,\big|\,\mathcal F_{k_0}\big] \longrightarrow \exp\Big(-\lambda\sqrt{\tfrac{2}{\pi\sigma^2}}\,D_\infty\Big),$$
as first $n$, then $k_0$, tend to infinity. To do so we prove a lower and an upper bound, the latter relying crucially on Proposition 6. We then use Cantor diagonal extraction to make $k_0 = k_0(n)$ go to infinity with $n$ and apply Lemma 15 to obtain convergence in probability of $\sqrt{n}\,W''_{n,k_0(n)}$ to $\sqrt{2/(\pi\sigma^2)}\,D_\infty$. We will then use (15) to conclude the proof.

We now get to the details. First we show a lower bound on the conditional Laplace transform. For any $\lambda > 0$, by Jensen's inequality and the branching property, we have
$$E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0}\big)\,\big|\,\mathcal F_{k_0}\big] \ge \exp\Big(-\lambda\sqrt{n}\sum_{|u|=k_0} E_{X_u}\big[W'_{n-k_0}\big]\Big). \tag{16}$$
By Proposition 5, for every $x \in \mathbb R$, $\sqrt{n}\,E_x[W'_{n-k_0}]$ converges to $\theta R(x)e^{-x}$ as $n \to \infty$ and is bounded from above by $\theta' R(x)e^{-x}$, with $\theta$ and $\theta'$ as in the statement of that proposition. Furthermore, using Proposition 2, one easily checks that $\sum_{|u|=k_0} R(X_u)\,e^{-X_u}$ is finite in expectation and therefore almost surely. By dominated convergence, we get almost surely
$$\sqrt{n}\sum_{|u|=k_0} E_{X_u}\big[W'_{n-k_0}\big] \xrightarrow[n\to\infty]{} \theta \sum_{|u|=k_0} R(X_u)\,e^{-X_u}. \tag{17}$$
By Equations (16) and (17), we get almost surely,
$$\liminf_{n\to\infty} E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0}\big)\,\big|\,\mathcal F_{k_0}\big] \ge \exp\Big(-\lambda\theta\sum_{|u|=k_0} R(X_u)\,e^{-X_u}\Big). \tag{18}$$
We now deal with the upper bound. We notice that for $\lambda > 0$ fixed and for any $\delta \in (0, 1)$, there exists $\varepsilon > 0$ such that
$$e^{-\lambda y} \le 1 - \delta\lambda y \quad\text{for every } y \in [0, \varepsilon]. \tag{19}$$
Fix $\lambda > 0$ and $\delta \in (0, 1)$, and take $\varepsilon$ satisfying (19). We compute
$$E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0}\big)\,\big|\,\mathcal F_{k_0}\big] = \prod_{|u|=k_0} E_{X_u}\big[\exp\big(-\lambda\sqrt{n}\,W'_{n-k_0}\big)\big] \le \prod_{|u|=k_0} E_{X_u}\big[\exp\big(-\lambda\sqrt{n}\,W'_{n-k_0}\,\mathbf 1_{\{\sqrt{n}\,W'_{n-k_0} \le \varepsilon\}}\big)\big],$$
where the last inequality comes from the fact that $W'_{n-k_0} \ge 0$ by definition. In the calculations that follow, we first apply inequality (19), then use linearity of expectation and finally the inequality $1 - x \le e^{-x}$:
$$\prod_{|u|=k_0} E_{X_u}\big[\exp\big(-\lambda\sqrt{n}\,W'_{n-k_0}\,\mathbf 1_{\{\sqrt{n}\,W'_{n-k_0}\le\varepsilon\}}\big)\big] \le \exp\Big(-\delta\lambda\sqrt{n}\sum_{|u|=k_0}\Big(E_{X_u}\big[W'_{n-k_0}\big] - E_{X_u}\big[W'_{n-k_0}\,\mathbf 1_{\{\sqrt{n}\,W'_{n-k_0} > \varepsilon\}}\big]\Big)\Big).$$
Using Fatou's lemma, we obtain:
$$\limsup_{n\to\infty} E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0}\big)\,\big|\,\mathcal F_{k_0}\big] \le \exp\Big(-\delta\lambda \sum_{|u|=k_0} \liminf_{n\to\infty}\Big(\sqrt{n}\,E_{X_u}\big[W'_{n-k_0}\big] - \sqrt{n}\,E_{X_u}\big[W'_{n-k_0}\,\mathbf 1_{\{\sqrt{n}\,W'_{n-k_0} > \varepsilon\}}\big]\Big)\Big).$$
As seen above, by Proposition 5, the first term inside the summation on the right-hand side converges towards $\theta R(X_u)\,e^{-X_u}$ as $n \to \infty$, almost surely.
Furthermore, by Proposition 6, there exists a positive function $h$, depending on $\varepsilon$, such that $\frac{h(x)}{R(x)} \to 0$ as $x \to \infty$ and such that for every $k_0 \in \mathbb N$ and every $u$ with $|u| = k_0$,
$$\limsup_{n\to\infty} \sqrt{n}\, E_{X_u}\big[W'_{n-k_0}\,\mathbf 1_{\{\sqrt{n}\,W'_{n-k_0} > \varepsilon\}}\big] \le h(X_u)\, e^{-X_u}.$$
Altogether this gives almost surely, for every $k_0 \in \mathbb N$,
$$\limsup_{n\to\infty} E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0}\big)\,\big|\,\mathcal F_{k_0}\big] \le \exp\Big(-\delta\lambda \sum_{|u|=k_0} \big(\theta R(X_u) - h(X_u)\big)\, e^{-X_u}\Big). \tag{20}$$
Since $\min_{|u|=k_0} X_u \to \infty$ almost surely, we get
$$\sup_{|u|=k_0} \frac{h(X_u)}{R(X_u)} \xrightarrow[k_0\to\infty]{} 0 \quad\text{almost surely},$$
and, as a consequence, again since $\min_{|u|=k_0} X_u \to \infty$ almost surely and since $R(x)/x$ converges to a positive constant as $x \to \infty$ by Lemma 3,
$$\sum_{|u|=k_0} \big(\theta R(X_u) - h(X_u)\big)\, e^{-X_u} \xrightarrow[k_0\to\infty]{} \sqrt{\tfrac{2}{\pi\sigma^2}}\, D_\infty \quad\text{almost surely}.$$
Together with (20), this shows that
$$\limsup_{k_0\to\infty}\limsup_{n\to\infty} E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0}\big)\,\big|\,\mathcal F_{k_0}\big] \le \exp\Big(-\delta\lambda\sqrt{\tfrac{2}{\pi\sigma^2}}\, D_\infty\Big). \tag{21}$$
Letting $\delta \to 1$ in (21) and using (18), we finally get for any $\lambda > 0$,
$$\lim_{k_0\to\infty}\lim_{n\to\infty} E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0}\big)\,\big|\,\mathcal F_{k_0}\big] = \exp\Big(-\lambda\sqrt{\tfrac{2}{\pi\sigma^2}}\, D_\infty\Big) \quad\text{almost surely}.$$
Using Cantor diagonal extraction, there exists a sequence $(k_0(n))_{n\ge0}$ (that goes to infinity as $n \to \infty$) such that for any $\lambda \in \mathbb Q_+ = \mathbb Q \cap (0, \infty)$, $E\big[\exp\big(-\lambda\sqrt{n}\,W''_{n,k_0(n)}\big)\,\big|\,\mathcal F_{k_0(n)}\big]$ converges to $\exp\big(-\lambda\sqrt{2/(\pi\sigma^2)}\,D_\infty\big)$ almost surely as $n \to \infty$. We now apply Lemma 15 in Appendix B with $Y_n = \sqrt{n}\,W''_{n,k_0(n)}$ and $\mathcal G_n = \mathcal F_{k_0(n)}$ to get:
$$\sqrt{n}\,W''_{n,k_0(n)} \xrightarrow[n\to\infty]{} \sqrt{\tfrac{2}{\pi\sigma^2}}\, D_\infty \quad\text{in probability}.$$
Finally, we use (15) to see that $\sqrt{n}\, W_n$ also converges to $\sqrt{2/(\pi\sigma^2)}\, D_\infty$ in probability as $n \to \infty$, which concludes the proof.

Proof of Proposition 6
This section contains the proof of Proposition 6. As is customary in this context, the main idea is to use a decomposition of the particles along the children of the spine. More precisely, let $\mathcal G = \sigma\big(\xi_k,\ X_{\xi_k i} :\ k \in \mathbb N,\ i \in \mathbb N^*\big)$ be the $\sigma$-algebra containing the information about the spine and its children. Applying first the many-to-one formula (Proposition 2) and then Markov's inequality conditionally on $\mathcal G$, we have:
$$\sqrt{n}\, E_x\big[W'_n\,\mathbf 1_{\{\sqrt{n}\,W'_n > \varepsilon\}}\big] \le \sqrt{n}\, e^{-x}\, E^*_x\Big[\mathbf 1_{\{\min_{0\le k\le n} X_{\xi_k}\ge0\}}\,\Big(\frac{\sqrt{n}}{\varepsilon}\, E^*_x\big[W'_n\,\big|\,\mathcal G\big]\Big) \wedge 1\Big].$$
Using Lemma 3, we obtain
$$\sqrt{n}\, E_x\big[W'_n\,\mathbf 1_{\{\sqrt{n}\,W'_n > \varepsilon\}}\big] \le \theta'\, R(x)\, e^{-x}\, E^*_x\Big[\Big(\frac{\sqrt{n}}{\varepsilon}\, E^*_x\big[W'_n\,\big|\,\mathcal G\big]\Big) \wedge 1\ \Big|\ \min_{0\le k\le n} X_{\xi_k} \ge 0\Big], \tag{22}$$
where $\theta'$ is the constant from Lemma 3. The proof of the following lemma is postponed.
Lemma 7 (Decomposition of $W'_n$ along the spine). We have, $P^*_x$-a.s., that
$$E^*_x\big[W'_n\,\big|\,\mathcal G\big] \le e^{-X_{\xi_n}} + \sum_{k=0}^{n-1}\ \sum_{i:\ \xi_k i \ne \xi_{k+1}} E_{X_{\xi_k i}}\big[W'_{n-k-1}\big].$$
Applying Lemma 7, we obtain the following bound:
$$E^*_x\Big[\Big(\frac{\sqrt{n}}{\varepsilon}\, E^*_x\big[W'_n\,\big|\,\mathcal G\big]\Big) \wedge 1\ \Big|\ \min_{0\le k\le n} X_{\xi_k}\ge0\Big] \le T_1(x,\varepsilon,n) + T_2(x,\varepsilon,n), \tag{23}$$
where
$$T_1(x,\varepsilon,n) = E^*_x\Big[\Big(\frac{\sqrt{n}}{\varepsilon}\, e^{-X_{\xi_n}}\Big) \wedge 1\ \Big|\ \min_{0\le k\le n} X_{\xi_k}\ge0\Big]$$
and
$$T_2(x,\varepsilon,n) = E^*_x\Big[\Big(\frac{\sqrt{n}}{\varepsilon} \sum_{k=0}^{n-1}\ \sum_{i:\ \xi_k i\ne\xi_{k+1}} E_{X_{\xi_k i}}\big[W'_{n-k-1}\big]\Big) \wedge 1\ \Big|\ \min_{0\le k\le n} X_{\xi_k}\ge0\Big].$$
The proofs of the following two lemmas are postponed as well.

Lemma 8. For any fixed $\varepsilon > 0$ and $x \ge 0$,
$$T_1(x,\varepsilon,n) \xrightarrow[n\to\infty]{} 0. \tag{24}$$

Lemma 9. For every $\varepsilon > 0$, there exists a positive function $\tilde h$, such that $\tilde h(x) \to 0$ as $x \to \infty$ and such that the following holds: for every $x \ge 0$, we have
$$\limsup_{n\to\infty} T_2(x,\varepsilon,n) \le \tilde h(x). \tag{25}$$
We can now finish the proof of Proposition 6.
Proof of Proposition 6. Applying Lemmas 8 and 9 to Equation (23), we get that for every $\varepsilon > 0$ and $x \ge 0$,
$$\limsup_{n\to\infty} \sqrt{n}\, E_x\big[W'_n\, \mathbf 1_{\{\sqrt{n}\,W'_n > \varepsilon\}}\big] \le \theta'\, R(x)\, \tilde h(x)\, e^{-x},$$
which implies the proposition upon setting $h(x) = \theta'\, R(x)\, \tilde h(x)$.

Proofs of Lemmas 7, 8 and 9
Proof of Lemma 7. Remember the definition of $W'_n$ in Equation (13). We decompose this expression using the spine and its children, along with the branching property, in order to get the following identity:
$$W'_n = e^{-X_{\xi_n}}\,\mathbf 1_{\{\min_{0\le k\le n} X_{\xi_k}\ge0\}} + \sum_{k=0}^{n-1}\ \sum_{i:\ \xi_k i \ne \xi_{k+1}} W'_{n-k-1}(k,i)\,\mathbf 1_{\{\min_{0\le j\le k} X_{\xi_j}\ge0\}}, \tag{26}$$
where $W'_{n-k-1}(k,i)$ has the law of $W'_{n-k-1}$ under $P_{X_{\xi_k i}}$ conditioned on $\mathcal G$. We will use an elementary inequality, valid for any random variables $X$ and $X'$ (inequality (27)). Now we condition on $\mathcal G$, and using inequality (27), we get:
$$E^*_x\big[W'_n\,\big|\,\mathcal G\big] \le e^{-X_{\xi_n}}\,\mathbf 1_{\{\min_{0\le k\le n} X_{\xi_k}\ge0\}} + \sum_{k=0}^{n-1}\ \sum_{i:\ \xi_k i \ne \xi_{k+1}} E_{X_{\xi_k i}}\big[W'_{n-k-1}\big]\,\mathbf 1_{\{\min_{0\le j\le k} X_{\xi_j}\ge0\}}.$$
We end the proof by bounding the indicator functions by 1.
Proof of Lemma 8. By Iglehart [Igl74] and Bolthausen [Bol76], we know that $X_{\xi_n}/\sqrt{n}$ converges in distribution to a positive random variable under the conditioned probability $P^*_x\big(\,\cdot\mid \min_{0\le k\le n} X_{\xi_k} \ge 0\big)$, so $\sqrt{n}\, e^{-X_{\xi_n}}$ converges in distribution to 0 under the same conditioning. Moreover, the random variables $\big(\frac{\sqrt{n}}{\varepsilon}\, e^{-X_{\xi_n}}\big) \wedge 1$ are trivially bounded by 1. Hence, by bounded convergence,
$$E^*_x\Big[\Big(\frac{\sqrt{n}}{\varepsilon}\, e^{-X_{\xi_n}}\Big) \wedge 1\ \Big|\ \min_{0\le k\le n} X_{\xi_k} \ge 0\Big] \xrightarrow[n\to\infty]{} 0,$$
which gives us that $T_1(x,\varepsilon,n) \to 0$ as $n \to \infty$.
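The Iglehart-Bolthausen limit used in this proof can be visualized with a small rejection-sampling experiment (our own sketch; a standard Gaussian walk is an assumed replacement for the spine walk). Conditioned on staying non-negative up to time $n$, the endpoint lives on the scale $\sqrt{n}$, which is why $\sqrt{n}\,e^{-X_{\xi_n}}$ is small with high probability.

```python
import math, random

# Rejection-sampling sketch of the Iglehart-Bolthausen limit used above
# (our own illustration; a standard Gaussian walk replaces the spine walk).
# Conditioned on staying non-negative up to time n, the endpoint lives on
# the scale sqrt(n), so sqrt(n) * exp(-X_n) is small with high probability.
def conditioned_endpoint(n, seed=0):
    """Sample X_n / sqrt(n) for a walk conditioned to stay >= 0 up to time n."""
    rng = random.Random(seed)
    while True:                    # retry until a path stays non-negative
        s, ok = 0.0, True
        for _ in range(n):
            s += rng.gauss(0.0, 1.0)
            if s < 0:
                ok = False
                break
        if ok:
            return s / math.sqrt(n)

v = conditioned_endpoint(100)
```

Collecting many samples of `v` gives a histogram close to the Rayleigh-type limit law of the conditioned walk.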
In order to prove Lemma 9, we introduce a lemma on the convergence of functionals of the branching random walk, conditioned on the spine staying above the origin until time $n$, to corresponding functionals of the process conditioned on the spine staying above the origin for all time. It is inspired by an analogous result for random walks by Kersting and Vatutin [KV17, Lemma 5.2]. We recall that $R$ is harmonic for the sub-Markov process obtained by killing $(X_{\xi_n})_{n\ge0}$ when entering $(-\infty, 0)$. For $x \ge 0$, define the probability measure $P^+_x$ by
$$\frac{dP^+_x}{dP^*_x}\bigg|_{\sigma(X_{\xi_0},\ldots,X_{\xi_n})} = \frac{R(X_{\xi_n})}{R(x)}\,\mathbf 1_{\{\min_{0\le k\le n} X_{\xi_k} \ge 0\}}, \qquad n \ge 0, \tag{29}$$
and denote the associated expectation by $E^+_x$. Heuristically, under $P^+_x$, the motion of the spine is conditioned to stay non-negative for all time.
Lemma 10. With the above definitions, let $(Y_n)_{n\ge0}$ be a uniformly bounded sequence of random variables, adapted to $(\mathcal F_n)_{n\ge0}$, such that $Y_n$ converges in $P^+_x$-probability to a random variable $Y_\infty$. Then
$$\lim_{n\to\infty} E^*_x\Big[Y_n\ \Big|\ \min_{0\le k\le n} X_{\xi_k} \ge 0\Big] = E^+_x[Y_\infty].$$
We will furthermore need the following estimate on the potential kernel of the spine under the law $P^+_x$:

Lemma 11. Let $f : \mathbb R_+ \to \mathbb R_+$ be a bounded, non-increasing function satisfying $\sum_{k\ge0} (1+k) f(k) < \infty$. Then
$$E^+_x\Big[\sum_{n\ge0} f(X_{\xi_n})\Big] = o(R(x)) \quad\text{as } x \to \infty.$$
Furthermore, the above expectation is finite for every $x \ge 0$.
Proof of Lemma 10. We give the proof for completeness; it follows almost exactly along the lines of Lemma 5.2 in Kersting and Vatutin [KV17]. Throughout the proof, fix $x \ge 0$. Upon passing to subsequences and arguing by contradiction, we may and will assume that $Y_n$ converges almost surely to $Y_\infty$ under $P^+_x$. Define $m_n(x) = P^*_x\big(\min_{0\le k\le n} X_{\xi_k} \ge 0\big)$. Then, for every $l \le n$, by the Markov property,
$$E^*_x\Big[Y_l\ \Big|\ \min_{0\le k\le n} X_{\xi_k} \ge 0\Big] = E^*_x\Big[Y_l\,\mathbf 1_{\{\min_{0\le k\le l} X_{\xi_k}\ge0\}}\,\frac{m_{n-l}(X_{\xi_l})}{m_n(x)}\Big]. \tag{31}$$
Now fix $l \in \mathbb N$ for the moment. By Lemma 3, the ratio $m_{n-l}(X_{\xi_l})/m_n(x)$ converges to $R(X_{\xi_l})/R(x)$ as $n \to \infty$ and is bounded by a constant multiple of this quantity. By dominated convergence, we get
$$\lim_{n\to\infty} E^*_x\Big[Y_l\ \Big|\ \min_{0\le k\le n} X_{\xi_k} \ge 0\Big] = E^*_x\Big[Y_l\,\mathbf 1_{\{\min_{0\le k\le l} X_{\xi_k}\ge0\}}\,\frac{R(X_{\xi_l})}{R(x)}\Big] = E^+_x[Y_l],$$
where the last equality follows from the definition of $P^+_x$ in Equation (29). Now let $a > 1$. Using again Lemma 3, one bounds $E^+_x[|Y_n - Y_l|]$, for some constant $K$ and every $1 \le l \le n$, by a quantity which converges to 0 as first $n$, then $l$, tend to infinity, by the dominated convergence theorem. Hence, using Equation (31), we obtain that the conditional expectations of $Y_n$ and $Y_l$ are asymptotically close. Now set $M = \sup_n \|Y_n\|_\infty$ and note that $|Y_\infty| \le M$ almost surely. Then, comparing the conditioning up to time $n$ with the conditioning up to time $\lfloor an\rfloor$ and dividing by $m_n(x)$, we get a two-sided bound involving the ratio $m_{\lfloor an\rfloor}(x)/m_n(x)$. Note that $m_{\lfloor an\rfloor}(x)/m_n(x) \to 1/\sqrt{a}$ as $n \to \infty$ by Lemma 3. Hence, using (32), we get a bound whose error term vanishes as $a \to 1$. Letting $a \to 1$ gives
$$\lim_{n\to\infty} E^*_x\Big[Y_n\ \Big|\ \min_{0\le k\le n} X_{\xi_k} \ge 0\Big] = E^+_x[Y_\infty],$$
which was to be proven.
Proof of Lemma 11. By the definition of the law $P^+_x$, we have
$$E^+_x\Big[\sum_{n\ge0} f(X_{\xi_n})\Big] = \frac{1}{R(x)} \sum_{n\ge0} E^*_x\Big[R(X_{\xi_n})\, f(X_{\xi_n})\,\mathbf 1_{\{\min_{0\le k\le n} X_{\xi_k}\ge0\}}\Big]. \tag{33}$$
Let $\mu$ and $\hat\mu$ be the renewal measures associated to the (absolute values of the) strictly descending and strictly ascending ladder heights of the random walk $(X_{\xi_n})_{n\ge0}$, respectively, see Appendix A. Since the random walk has finite variance by (8), the ladder heights have finite expectation, see e.g. Rogozin [Rog64]. Recall that $R(x) = \mu([0,x])$ and define $\hat R(x) = \hat\mu([0,x])$. Also recall the constant $c_=$ from Appendix A. Using Theorem 14, we can express the sum on the right-hand side of (33) as a double integral against $\mu$ and $\hat\mu$ (Equation (34)). Define the function $\tilde f : y \mapsto (1+y)f(y)$, $y \ge 0$. By (12), there exists $C \in (0, \infty)$ such that this double integral is bounded by $C$ times a double integral of $\tilde f$ (Equation (35)). In order to bound the latter, we first integrate over $z$, then over $y$. For $w \ge 0$, put $g(w) = \int_0^\infty \tilde f(w + z)\,\hat\mu(dz)$. Then (35) implies
$$E^+_x\Big[\sum_{n\ge0} f(X_{\xi_n})\Big] \le \frac{C}{R(x)} \int_{[0,x]} g(x - y)\,\mu(dy). \tag{36}$$
We first bound $g$. For simplicity, suppose that $\hat\mu$ is non-arithmetic or that its span is equal to 1; the general case can be treated by a scaling argument. We then have for every $w \ge 0$, since $f$ is non-increasing,
$$g(w) \le \sum_{k=0}^\infty \tilde f(w + k)\,\big(\hat R(k+1) - \hat R(k)\big).$$
By Feller's renewal theorem, $\hat R(k+1) - \hat R(k)$ is uniformly bounded in $k$, hence
$$g(w) \le C' \sum_{k=0}^\infty \tilde f(w + k),$$
using again that $f$ is non-increasing. The hypotheses on $f$ now readily imply that $g$ is bounded and that
$$g(w) \xrightarrow[w\to\infty]{} 0. \tag{37}$$
By (33), (34) and (36), it remains to show that
$$\frac{1}{R(x)} \int_{[0,x]} g(x - y)\,\mu(dy) \xrightarrow[x\to\infty]{} 0. \tag{38}$$
Let $\delta > 0$. By (37), there exists $y_0 \ge 0$ such that $g(y) \le \delta$ for all $y \ge y_0$. Then,
$$\int_{[0,x]} g(x - y)\,\mu(dy) \le \delta\,\mu([0,x]) + \int_{(x-y_0,\,x]} g(x - y)\,\mu(dy).$$
The first term on the right-hand side equals $\delta R(x)$ by definition, and the second term converges to a constant as $x \to \infty$, by the key renewal theorem (see Feller [Fel71, p. 363]). As a consequence,
$$\limsup_{x\to\infty} \frac{1}{R(x)} \int_{[0,x]} g(x - y)\,\mu(dy) \le \delta.$$
Since $\delta$ was arbitrary, this proves (38) and thus finishes the proof.
As mentioned above, our goal is to apply Lemma 10 to a suitable sequence of random variables $(Y_n)_{n\ge0}$. Recall the definition of $T_2(x,\varepsilon,n)$ from Equation (23). Using Proposition 5, we first bound the inner expectations $E_{X_{\xi_k i}}[W'_{n-k-1}]$ appearing there, with $\theta'$ the constant from Proposition 5. In order to bound the contribution of the children of the spine to the right-hand side of Equation (39), we define a suitable sequence of random variables indexed by $k \ge 0$, which we bound for every $k \ge 0$ using Lemma 4. Using the previous inequalities and equivalents, and plugging them into the expression for $T_2(x,\varepsilon,n)$, we obtain the bound (41). In order to bound the expectation on the right-hand side of (41), we first bound $Y_n$ by $Y'_n + Y''_n$ (the 4 in the exponent in the definition of $Y'_n$ is unimportant and serves to make notation simpler later on). Note that both sequences of random variables $(Y'_n)_{n\ge0}$ and $(Y''_n)_{n\ge0}$ are adapted to the canonical filtration $(\mathcal F_n)_{n\ge0}$ of the branching random walk and uniformly bounded by 1. By monotonicity, $Y'_n$ converges $P^+_x$-almost surely as $n \to \infty$ to a limit $Y'_\infty$. We now claim the following:
a) $E^+_x[Y'_\infty]$ is of the order required for Lemma 9;
b) $Y''_n \to 0$ as $n \to \infty$, in $P^+_x$-probability.
Let us see how these two claims imply the statement of the lemma. First, applying Lemma 10 to the r.v.'s $(Y'_n)_{n\ge0}$ and $Y'_\infty$, we have
$$\lim_{n\to\infty} E^*_x\Big[Y'_n\ \Big|\ \min_{0\le k\le n} X_{\xi_k} \ge 0\Big] = E^+_x[Y'_\infty].$$
Second, applying Lemma 10 again to the r.v.'s $(Y''_n)_{n\ge0}$ and using claim b) above, we get
$$\lim_{n\to\infty} E^*_x\Big[Y''_n\ \Big|\ \min_{0\le k\le n} X_{\xi_k} \ge 0\Big] = 0.$$
Plugging these two equalities into (42) and (41) yields (with some $C > 0$) the required bound; together with Claim a) this yields the lemma.

It now remains to prove Claims a) and b) above. We start with Claim a). Recall that under $P^+_x$, the offspring distribution of the spine is size-biased; the inequality used below is an easy consequence of (11) in Lemma 3. We decompose $f = f_1 + f_2$. We now use Lemma 16 in the appendix and Assumptions (3) and (4) to bound certain integrals of $f_1$ and $f_2$. First note that it can be obtained from Lemma B.1 (i) in Aïdékon [Aïd13] (and is implicit in the proof of part (ii) of that lemma) that Assumptions (3) and (4) imply the integrability needed there. We then apply Lemma 16 twice to the r.v. $W_1 + Z_1$, once under the law $E[W_1\,\cdot\,]$ and with $\rho(x) = x$, and once under the law $E[(Z_1/E[Z_1])\,\cdot\,]$ and with $\rho \equiv 1$. We then obtain the bounds (44) and (45). Now we may compute: note that $f$ is bounded and non-increasing by definition. Equations (44) and (45) together with Lemma 11 then imply Claim a).
To prove Claim b), we use the following invariance principle by Caravenna and Chaumont [CC08]: the rescaled process $(n^{-1/2} X_{\xi_{\lfloor nt\rfloor}})_{t\ge0}$ converges in distribution under $P^+_x$ to a three-dimensional Bessel process as $n \to \infty$. As a consequence, for every $\eta \in (0, 1)$ there exists $\delta > 0$ such that, for large $n$, with probability at least $1 - \eta$, we have $X_{\xi_k} > \delta\sqrt{n}$ for every $k \ge \eta n$. So there is some positive constant $c$ such that, with probability at least $1 - \eta$, the variable $Y''_n$ is bounded by a quantity which converges to 0 in $P^+_x$-probability as $n \to \infty$, since, as shown above, the corresponding expectation is finite for every $x \ge 0$. This proves Claim b) and finishes the proof of the lemma.

A Random walks on the half-line
Let $(S_n)_{n\ge0}$ be a random walk of oscillating type started at $S_0 = 0$. Denote its Markov dual by $\hat S_n = -S_n$. Associated to the random walk are four ladder height processes with associated ladder times:
• strictly descending: $(H_n)_{n\ge0}$, $(L_n)_{n\ge0}$;
• weakly descending: $(H^=_n)_{n\ge0}$, $(L^=_n)_{n\ge0}$;
• strictly ascending: $(\hat H_n)_{n\ge0}$, $(\hat L_n)_{n\ge0}$;
• weakly ascending: $(\hat H^=_n)_{n\ge0}$, $(\hat L^=_n)_{n\ge0}$.
Explicitly, $H_n = S_{L_n}$, where $L_0 = 0$ and $L_{n+1} = \min\{k > L_n : S_k < S_{L_n}\}$.
The other processes are defined analogously, with the "<" above replaced by, respectively, "≤", ">", "≥". Let $\mu, \mu^=, \hat\mu, \hat\mu^=$ be the renewal measures of the processes $(|H_n|)_{n\ge0}$, $(|H^=_n|)_{n\ge0}$, $(\hat H_n)_{n\ge0}$ and $(\hat H^=_n)_{n\ge0}$, respectively. We first note the following fact:

Lemma 12. There exists a constant $c_= \in (0, \infty)$ such that $\mu^= = c_=\,\mu$ and $\hat\mu^= = c_=\,\hat\mu$. Furthermore, $c_=$ admits several equivalent expressions.

Proof. The first statement (with $c_=$ equal to its first, respectively second, expression given in the statement) is (1.13) in [Fel71, Section XII.1]. The equivalence of the first two expressions for $c_=$ follows by considering, for each $n$, the time-reversed walk $(S^*_k)_{0\le k\le n}$, where $S^*_k = S_n - S_{n-k}$, which has the same law as $(S_k)_{0\le k\le n}$ (see [Fel71, Section XII.2]). Finally, the last expression for $c_=$ follows from setting $s = 1$ in Lemma 2 in [Fel71, Section XVIII.3].
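For concreteness, the ladder points of one simulated path can be extracted as follows (our own illustrative helper, following the definitions above; with continuous increments, weak and strict ladder points coincide almost surely).

```python
import random

# Extraction of descending ladder epochs and heights from one path
# (our own helper, following the definitions above). With continuous
# increments, weak and strict ladder points coincide almost surely.
def ladder_points(increments, strict=True):
    """Return the list of (epoch, height) descending ladder points of S."""
    s, record, points = 0.0, 0.0, [(0, 0.0)]
    for k, dx in enumerate(increments, start=1):
        s += dx
        if (s < record) if strict else (s <= record):
            record = s
            points.append((k, s))
    return points

rng = random.Random(0)
path = [rng.gauss(0.0, 1.0) for _ in range(1000)]
strict_pts = ladder_points(path)
weak_pts = ladder_points(path, strict=False)
```

The ascending ladder processes are obtained by applying the same helper to the dual path `[-dx for dx in path]`.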
From the "duality lemma" [Fel71, Section XII.2], we have the following equivalent representation of µ: The other measures have analogous representations with the "<" above replaced by, respectively, ≤, >, ≥.
We are interested in the random walk killed when it enters the negative half-line. Define two functions $R, \bar R : \mathbb R_+ \to \mathbb R_+$ by
$$R(x) = \mu([0,x]) \qquad\text{and}\qquad \bar R(x) = \mu^=([0,x]).$$
The function $R$ is also called the renewal function associated to the strictly descending ladder heights of the random walk $(S_n)_{n\ge0}$. The following lemma is originally due to Tanaka [Tan89]; a good reference is [KV17, Lemma 4.2, p. 78].

Lemma 13. The functions $R$ and $\bar R$ are harmonic for the random walk killed when it enters $(-\infty, 0)$ and $(-\infty, 0]$, respectively, i.e. for all $x \ge 0$,
$$E\big[R(x + S_1)\,\mathbf 1_{\{x + S_1 \ge 0\}}\big] = R(x) \qquad\text{and}\qquad E\big[\bar R(x + S_1)\,\mathbf 1_{\{x + S_1 > 0\}}\big] = \bar R(x).$$

Finally, define two Green operators by
$$Gf(x) = \sum_{n\ge0} E\big[f(x + S_n)\,\mathbf 1_{\{\min_{0\le k\le n}(x + S_k) \ge 0\}}\big] \qquad\text{and}\qquad \bar Gf(x) = \sum_{n\ge0} E\big[f(x + S_n)\,\mathbf 1_{\{\min_{0\le k\le n}(x + S_k) > 0\}}\big].$$
For $x = 0$, Equation (46) (with $\hat\mu$ instead of $\mu$) gives an explicit expression for $Gf(0)$. In general, the Green operators have the following expressions:

Theorem 14. For every measurable, non-negative function $f : \mathbb R_+ \to \mathbb R_+$, we have
$$Gf(x) = c_= \int_{[0,x]} \int_{[0,\infty)} f(x - y + z)\,\hat\mu(dz)\,\mu(dy),$$
and an analogous expression for $\bar Gf(x)$.

We were not able to find this result in the literature in this generality. For random walks on the integers, it was proven by Spitzer [Spi76, P19.3, p. 209]. The general result can, with some effort, be deduced from a corresponding result for Lévy processes [Tan04], using the fact that the Green operators and the renewal measures defined above are equal to the corresponding ones for the compound Poisson process associated to the random walk. However, all of these proofs make use of Sparre Andersen's identities. Equivalently, by (46) and Lemma 12, for every measurable, non-negative function $g$, we have an identity expressing $\sum_{n\ge0} E\big[g\big(\min_{0\le k\le n} S_k,\ S_n - \min_{0\le k\le n} S_k\big)\big]$ in terms of $c_=$, $\mu$ and $\hat\mu$. The expressions for $Gf(x)$ and $\bar Gf(x)$ then follow by setting in the above equation $g(y,z) = f(x + y + z)\,\mathbf 1_{\{y \ge -x\}}$ and $g(y,z) = f(x + y + z)\,\mathbf 1_{\{y > -x\}}$, respectively.
B A criterion for convergence in probability

Lemma 15. Let $(\mathcal G_n)_{n\ge0}$ be a filtration, $(Y_n)_{n\ge0}$ a sequence of non-negative random variables and $Y$ a non-negative random variable, measurable with respect to $\mathcal G_\infty = \sigma\big(\bigcup_{n\ge0} \mathcal G_n\big)$, such that for every $\lambda \in \mathbb Q_+ = \mathbb Q \cap (0, \infty)$,
$$E\big[e^{-\lambda Y_n}\,\big|\,\mathcal G_n\big] \xrightarrow[n\to\infty]{} e^{-\lambda Y} \quad\text{almost surely}.$$
Then $Y_n$ converges in probability to $Y$ as $n \to \infty$.
Proof of Lemma 15. The proof proceeds in three steps.
First step: Fix a representative µ n of the law of Y n conditioned on G n . We may see µ n as a random measure on the compact space [0, ∞]. Hence, the sequence (µ n (ω)) n≥0 is tight for every ω ∈ Ω. We wish to show that µ n weakly converges to δ Y almost surely as n → ∞.

Second step:
We wish to show that the law of $Y$ conditioned on $\mathcal G_n$ converges weakly to $\delta_Y$ as well, almost surely as $n \to \infty$. Note that, for every $\lambda \in \mathbb Q_+$, $\big(E[e^{-\lambda Y}\,|\,\mathcal G_n]\big)_{n\ge0}$ is a bounded $(\mathcal G_n)$-martingale, so it converges almost surely towards $E[e^{-\lambda Y}\,|\,\mathcal G_\infty] = e^{-\lambda Y}$. We can then use the same argument as above to obtain that the law of $Y$ conditioned on $\mathcal G_n$ converges a.s. towards $\delta_Y$.
Third step: Denote by $\tilde\mu_n$ the law of the pair $(Y_n, Y)$ conditioned on $\mathcal G_n$. By steps 1 and 2 above and Slutsky's lemma, $\tilde\mu_n$ converges a.s. towards $\delta_{(Y,Y)}$. Hence, putting $g(x,y) = |x - y| \wedge 1$, which is a bounded and continuous function, we get
$$E\big[|Y_n - Y| \wedge 1\ \big|\ \mathcal G_n\big] = \int g(x,y)\,d\tilde\mu_n(x,y) \xrightarrow[n\to\infty]{} 0 \quad\text{a.s.}$$
Using the dominated convergence theorem, we conclude that E[|Y n − Y | ∧ 1] converges to 0, i.e. that Y n converges in probability to Y . This concludes the proof.

C A Tauberian-type lemma
Lemma 16. Let $Y$ be a positive random variable, $\rho$ a function which is regularly varying at $\infty$ of index strictly greater than $-1$, and define, for every $y \ge 0$, $\varphi(y) = E\big[(e^{-y}\, Y) \wedge 1\big]$. Then the following statements are equivalent:
Changing variables, and then changing variables again, we have for every $y \ge 0$,
$$\int_1^{y\vee 1} l(t)\,\frac{dt}{t} = \log_+ \int_0^y \rho(z)\,dz.$$
Combining the two previous displays and collecting the above identities yields the statement of the lemma.