Local asymptotics for the first intersection of two independent renewals

We study the intersection of two independent renewal processes, $\rho=\tau\cap\sigma$. Assuming that $\mathbf{P}(\tau_1 = n ) = \varphi(n)\, n^{-(1+\alpha)}$ and $\mathbf{P}(\sigma_1 = n ) = \tilde\varphi(n)\, n^{-(1+ \tilde\alpha)} $ for some $\alpha,\tilde \alpha \geq 0$ and some slowly varying $\varphi,\tilde\varphi$, we give the asymptotic behavior first of $\mathbf{P}(\rho_1>n)$ (which is straightforward except in the case of $\min(\alpha,\tilde\alpha)=1$) and then of $\mathbf{P}(\rho_1=n)$. The result may be viewed as a kind of reverse renewal theorem, as we determine probabilities $\mathbf{P}(\rho_1=n)$ while knowing asymptotically the renewal mass function $\mathbf{P}(n\in\rho)=\mathbf{P}(n\in\tau)\mathbf{P}(n\in\sigma)$. Our results can be used to bound coupling-related quantities, specifically the increments $|\mathbf{P}(n\in\tau)-\mathbf{P}(n-1\in\tau)|$ of the renewal mass function.


Intersection of two independent renewals
We consider two independent (discrete) renewal processes τ and σ, whose laws are denoted respectively P_τ and P_σ, and the renewal process of intersections, ρ = τ ∩ σ. We denote P = P_τ ⊗ P_σ.
The process ρ appears in various contexts. In pinning models, for example, it may appear directly in the definition of the model (as in [1], where σ represents sites with nonzero disorder values, and τ corresponds to the polymer being pinned), or it appears in the computation of the variance of the partition function via a replica method (see for example [20]), and it is central in deciding whether disorder is relevant or irrelevant in these models, cf. [3].
When τ and σ have the same inter-arrival distribution, ρ_1 is related to the coupling time of τ and σ, if we allow τ and σ to start at different points. In particular, in the case µ := E[τ_1] < +∞, the coupling time ρ_1 has been used to study the rate of convergence in the renewal theorem, see [16, 17], using that, under such a coupling, |P(n ∈ τ) − P(n ∈ σ)| ≤ P(ρ_1 > n). Hence, if σ is delayed by a random X having the waiting time distribution ν of the renewal process (and denoting P_ν the delayed law of σ), we have that P_ν(n ∈ σ) = 1/µ for all n, and so P_τ ⊗ P_ν(ρ_1 > n) gives the rate of convergence in the renewal theorem. This question has also been studied via a more analytic method in [18, 13]. Denoting u_n := P(n ∈ τ) the renewal mass function of τ, Rogozin [18] proved that u_n − 1/µ ∼ (1/µ²) Σ_{k>n} P(τ_1 > k) as n → ∞. In this paper, we consider only the non-delayed case, with a brief exception to study |u_n − u_{n−1}|, see Theorem 1.6.

1.1. Setting of the paper. We assume that there exist α, α̃ ≥ 0 and slowly varying functions ϕ, ϕ̃ such that

(1.1)    P(τ_1 = n) = ϕ(n) n^{−(1+α)}   and   P(σ_1 = n) = ϕ̃(n) n^{−(1+α̃)}.

(As mentioned above, τ and σ are non-delayed, if not specified otherwise.) With no loss of generality, we assume that α ≤ α̃. We define µ_n := E[τ_1 ∧ n] and µ̃_n := E[σ_1 ∧ n] the truncated means, and also E[τ_1] = µ = lim_{n→∞} µ_n ≤ ∞, and similarly µ̃ = lim_{n→∞} µ̃_n.
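For later use, note that Karamata's theorem translates (1.1) into tail and truncated-mean asymptotics (this is a routine computation under (1.1)):

P(τ_1 > n) ∼ (1/α) ϕ(n) n^{−α}   and   µ_n ∼ (1/(α(1−α))) ϕ(n) n^{1−α}   for α ∈ (0, 1),

while for α = 1 one has P(τ_1 > n) ∼ ϕ(n)/n and µ_n ∼ Σ_{k≤n} ϕ(k)/k, which is slowly varying (for example µ_n ∼ c log n if ϕ(n) → c).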
The assumption (1.1) is very natural, and is widely used in the literature (for example, once again in pinning models). It covers in particular the case of the return times τ = {n ; S_{2n} = 0}, where (S_n)_{n≥0} is the simple symmetric nearest-neighbor random walk on Z^d (see e.g. [8, Ch. III] for d = 1, [14, Thm. 4] for d = 2 and [6, Thm. 4] for d = 3), or the case τ = {n ; S_n = 0} where (S_n)_{n≥0} is an aperiodic random walk in the domain of attraction of a symmetric stable law, see [15, Thm. 8].
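For instance, in the one-dimensional case the classical first-return formula (see [8, Ch. III]) gives

P(τ_1 = n) = (2n choose n) 2^{−2n} / (2n − 1) ∼ (2√π)^{−1} n^{−3/2}   as n → ∞,

so (1.1) holds with α = 1/2 and ϕ(n) → 1/(2√π).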
In Section 2, we recall the strong renewal theorems for τ and σ under assumption (1.1) (from [2, 5, 7] in the recurrent case, [11, App. A.5] in the transient case), as well as newer reverse renewal theorems (from [2]). We collect the results when τ is recurrent in Table 1, denoting r_n := P(τ_1 > n), and we refer to (2.1) for the transient case.
From Table 1 and (2.1), the renewal mass function of ρ satisfies

(1.2)    P(n ∈ ρ) = P(n ∈ τ) P(n ∈ σ) = ψ*(n) n^{−θ*}

for some θ* ≥ 0 and slowly varying function ψ*(n). For example, if both τ and σ are recurrent we have θ* = 2 − min(α, 1) − min(α̃, 1); if also α, α̃ ∈ (0, 1), then ψ* is a constant multiple of 1/ϕϕ̃. If instead both τ and σ are transient then θ* = 2 + α + α̃. Note that ρ is transient for θ* > 1 and recurrent for θ* < 1. Recalling that α ≤ α̃, if we define α* appropriately (in particular, α* = 1 − θ* = min(α, 1) + min(α̃, 1) − 1 when ρ is recurrent), then, based on Theorem 2.1 in the transient case and Table 1 in the recurrent case, we expect P(ρ_1 = n) to be expressed as n^{−(1+α*)} multiplied by a slowly varying function.
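As an illustration: if ϕ and ϕ̃ are asymptotically constant and α = α̃ = 3/4, then θ* = 1/2, ρ is recurrent and α* = 1/2, so one expects P(ρ_1 = n) to decay like n^{−3/2} up to a slowly varying factor; if instead α = α̃ = 1/4, then θ* = 3/2 > 1 and ρ is transient.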
Observe that the renewal function of ρ, defined as U*_n := Σ_{k=0}^{n} P(k ∈ ρ), is always regularly varying, with exponent α* = 1 − θ* in the recurrent case and 0 in the transient case.
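Indeed, by Karamata's theorem applied to (1.2): if θ* < 1 then U*_n ∼ (1 − θ*)^{−1} ψ*(n) n^{1−θ*}, which is regularly varying with exponent 1 − θ*, while if θ* > 1 the series converges, U*_n → E(|ρ|) < ∞, and U*_n is slowly varying.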
Our goal is to derive from (1.1) the local asymptotics of the inter-arrival distribution, that is, the asymptotics of P(ρ_1 = n). For general renewal processes ρ these asymptotics should not be uniquely determined by the asymptotic behavior of the renewal mass function (1.2) (which is known in our case), but the extra structure given by ρ = τ ∩ σ under (1.1) makes such a determination possible.
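The quantities P(ρ_1 > n) and P(ρ_1 = n) studied below are easy to explore numerically. The following is a minimal Monte Carlo sketch, assuming the pure power-law case ϕ ≡ ϕ̃ ≡ const, with the inter-arrival laws truncated at a large k_max so that they can be sampled exactly; it estimates P(ρ_1 > n) by merging the points of the two simulated processes.

import bisect
import itertools
import random

def make_sampler(alpha, kmax=10**6, rng=random):
    # Inverse-transform sampling from P(X = k) proportional to k^(-(1+alpha)), 1 <= k <= kmax.
    cdf = list(itertools.accumulate(k ** -(1.0 + alpha) for k in range(1, kmax + 1)))
    total = cdf[-1]
    return lambda: bisect.bisect_left(cdf, rng.random() * total) + 1

def intersects_before(sample_tau, sample_sigma, n):
    # Return True if tau and sigma have a common renewal point in (0, n], i.e. rho_1 <= n.
    t, s = sample_tau(), sample_sigma()
    while max(t, s) <= n:
        if t == s:
            return True
        if t < s:
            t += sample_tau()
        else:
            s += sample_sigma()
    return False  # any future common point is >= max(t, s) > n

if __name__ == "__main__":
    alpha, n, trials = 0.75, 200, 20000
    tau, sigma = make_sampler(alpha), make_sampler(alpha)
    survivals = sum(not intersects_before(tau, sigma, n) for _ in range(trials))
    print("estimated P(rho_1 > %d) = %.3f" % (n, survivals / trials))

With α = α̃ = 3/4 one is in the recurrent regime of the previous example, and for large n the estimate should decay roughly like n^{−1/2}, in line with α* = 1/2.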
Remark 1.1. For ρ to be recurrent, it is necessary that both τ and σ are recurrent, so (1.3) holds. It follows from Table 1 that ρ is recurrent if and only if one of the following also holds:

Case of transient ρ. Since P(n ∈ ρ) is summable (with sum E(|ρ|)), we must have θ* ≥ 1. Here the following is immediate from ([2], Theorem 1.4), given below as Theorem 2.1.
We will prove Theorem 1.3 in Sections 3-4. The cases (i) and (ii) are essentially immediate from known relations of the form P(ρ_1 > n) ∼ c/U*_n and are given in Section 3. Item (iii) seems to be a new result, and is treated in Section 4 via a probabilistic method. Note that in all cases, P(ρ_1 > n) is regularly varying with exponent −α*.
To obtain the asymptotics of P(ρ_1 = n) from Theorem 1.3 (in the case α* > 0), or using the weak reverse renewal Theorem 2.2 (in the case α* = 0), we only need to show that P(ρ_1 = k) is approximately constant on an interval [(1 − ε)n, n] with ε small. To that end we have the following lemma, which we will prove in Section 5.
Lemma 1.4. Assume (1.1), and suppose that ρ is recurrent. Let v_n := P(ρ_1 > n)² P(n ∈ ρ). Then for every δ > 0, there exists some ε > 0 such that, if n is large enough, we have for all k ∈ (0, εn)

(1.8)    (1 − δ) P(ρ_1 = n) − δ v_n ≤ P(ρ_1 = n − k) ≤ (1 + δ) P(ρ_1 = n) + δ v_n.

We will see later that v_n = O(P(ρ_1 = n)), so Lemma 1.4 is actually true without the δv_n terms, but we will not need this improved result.
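Let us indicate why v_n is the natural scale here: heuristically, in the regime α* ∈ (0, 1) one has P(ρ_1 > n) ≍ 1/U*_n (this is the content of the relations used in Section 3), while reverse renewal estimates in the spirit of Theorem 2.2 suggest P(ρ_1 = n) ≍ P(n ∈ ρ)/(U*_n)², that is P(ρ_1 = n) ≍ P(ρ_1 > n)² P(n ∈ ρ) = v_n.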
We can now state our main theorem, which we will prove in Section 6.
Rotating the lattice by π/4 shows that this is the same as the return time distribution for (S_n)_{n≥0} the SSRW on Z² (the even return times: ρ = {n ; S_{2n} = 0}). Hence (1.11) is a classical result of Jain and Pruitt [14].
Theorem 1.6. Let τ be a recurrent renewal process satisfying (1.1), and let u_n := P(n ∈ τ) be its renewal mass function. There exist constants c_i > 0 such that

(1.13)

Note that the right side of (1.13) is of order P(τ_1 > n) when α > 1/2. It is summable precisely when µ = E[τ_1] < +∞, and then, by Theorem 1.3(iii), (1.13) says |u_n − u_{n−1}| ≤ c_3 P(τ_1 > n). This gives additional information compared to the known asymptotics from [18]. We can sum (1.13) to obtain |u_n − 1/µ| ≤ c_3 Σ_{k>n} P(τ_1 > k), which is of the right order, but we cannot obtain the proper constant 1/µ² (see the display below). We also mention the works of Topchii [21, 22], treating the case when τ_1 is a continuous random variable with a density f(t); there one considers the density u(t) of the renewal function and its increments. Under some additional regularity conditions on f(t), an estimate is obtained there which is better than its analog in the infinite-mean case in Theorem 1.6, but the techniques of [21, 22] do not appear adaptable to the discrete setting.
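In detail, since u_m → 1/µ when µ < ∞, the summation just mentioned is a telescoping bound:

|u_n − 1/µ| = |Σ_{m>n} (u_m − u_{m−1})| ≤ Σ_{m>n} |u_m − u_{m−1}| ≤ c_3 Σ_{m>n} P(τ_1 > m),

whereas Rogozin's result recalled in the introduction identifies the precise constant 1/µ² in front of Σ_{k>n} P(τ_1 > k).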
Proof of Theorem 1.6. The second inequality in (1.13) is a direct consequence of Theorem 1.3(iii) and Table 1, so we prove the first one. Take σ a renewal process independent from τ, with the same inter-arrival distribution, but starting from σ_0 = 1. We can couple τ and σ so that τ = σ on [ρ_1, ∞). Then, denoting the corresponding joint distribution by P^{0,1}, we have P^{0,1}(n ∈ σ) = u_{n−1} and

|u_n − u_{n−1}| = |P^{0,1}(n ∈ τ) − P^{0,1}(n ∈ σ)| ≤ P^{0,1}(n ∈ τ ∪ σ, ρ_1 > n).

By Lemma A.1 there is a constant C_0 such that

and similarly for P^{0,1}(n ∈ σ, ρ_1 > n); observe moreover that for any x > 0

Combining these, it follows that there is a constant c_4 > 0 such that P^{0,1}(ρ_1 > n/4) ≤ c_4 P(ρ_1 > n), and hence Theorem 1.6 follows.
1.4. Organization of the rest of the paper and idea of the proof. First of all, we recall renewal and reverse renewal theorems in Section 2, which are used throughout the paper. Sections 3-4 are devoted to the proof of Theorem 1.3. Items (i)-(ii) are dealt with using Theorem 8.7.3 in [4], and our main contribution is the proof of item (iii). The underlying idea is that, in order to have {ρ_1 > n}, one of τ or σ typically makes a jump of order at least n. We decompose P(ρ_1 > n) according to the number k of steps before τ (resp. σ) escapes beyond n by a jump larger than (1 − ε)n: we find that the expected number of steps is approximately µ̃_n (resp. µ_n), giving Theorem 1.3(iii); in formula form, see the display below.
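The heuristic behind item (iii) is the following back-of-the-envelope version of the argument of Section 4 (for α ≥ 1):

P(ρ_1 > n) ≈ Σ_{k≥0} P(τ and σ do not meet in (0, τ_k], τ_{k+1} − τ_k > n) + (same with the roles of τ and σ exchanged)
           ≈ E[number of τ-renewals before τ first meets σ, truncated at level n] · P(τ_1 > n) + (symmetric term)
           ≈ µ̃_n P(τ_1 > n) + µ_n P(σ_1 > n),

the middle step being a truncated version of Lemma 4.2, E(K) = E(σ_1).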
Sections 5-6 contain the proof of Theorem 1.5. In Section 5, we prove Lemma 1.4 in two steps. First, we show that when ρ_1 = n, having only gaps of length at most δn is very unlikely; then, given that there is, say in τ, a gap larger than δn, we can stretch it (together with the associated σ intervals) by k ≤ δn at little cost: this proves that P(ρ_1 = n) ≈ P(ρ_1 = n + k). In Section 6, we conclude the proof of Theorem 1.5 by combining Lemma 1.4 with Theorem 1.3.

On renewal theorems.
In what follows we assume that the inter-arrival distribution of τ satisfies (1.1).

Recurrent case.
Here there are multiple subcases, as follows.
2.2. On reverse renewal theorems. In the opposite direction, if in place of (1.1) one assumes that P(n ∈ τ) is regularly varying with exponent −(1 − α), then for 0 ≤ α < 1 the asymptotics of P(τ_1 > n) follow from [4, Thm. 8.7.3]. It is not possible in general to deduce the asymptotics of P(τ_1 = n), which need not even be regularly varying. However, in certain cases, one can recover at least some behavior of P(τ_1 = n) from that of P(n ∈ τ) when the latter is regularly varying; we call such a result a reverse renewal theorem. Specifically, if the renewal function is slowly varying (as happens in the case of transient τ or α = 0), the following theorems apply.
Theorem 2.2 (Theorem 1.3 in [2]). If P(n ∈ τ) is regularly varying, and if U_n is slowly varying, then there exists some ε_n → 0 as n → ∞ such that

One can therefore obtain the local asymptotics of P(τ_1 = n) from this last theorem when one can show P(τ_1 = n) is approximately constant over an interval of length o(n), as done in Lemma 1.4.

Proof of Theorem 1.3(iii)
For α* ≥ 1 (i.e. α ≥ 1), we cannot extract the behavior of P(ρ_1 > n) directly from that of U*_n as in Section 3, and we need a preliminary result: we prove that P(ρ_1 > n) is regularly varying (with index −1), and hence for any ε > 0 we have

(4.1)    P(ρ_1 > εn) ∼ ε^{−1} P(ρ_1 > n)   as n → ∞.

In Section 4.1, we prove (4.1), with the help of [10]. In Section 4.3, we prove an upper bound for P(ρ_1 > n). Finally, in Section 4.4, we prove the corresponding lower bound.

4.1. Proof of (4.1). A sequence {u_n} is said to be in the de Haan class Π if there exists a slowly varying sequence ℓ_n such that for all λ > 0,

We write RVS_{−α} for the set of regularly varying sequences of index −α. We can state the results of Frenk [10] as follows.
If α = α̃ = 1, then Proposition 4.1 tells us that the slowly varying sequences u_n = P(n ∈ τ), ũ_n = P(n ∈ σ) are both in Π, with some corresponding slowly varying sequences ℓ_n, ℓ̃_n. (One expects ℓ_n ∼ ϕ(n), but we do not have or need a proof of this.) Therefore, letting L_n := ℓ_n ũ_n + ℓ̃_n u_n, the product sequence P(n ∈ ρ) = u_n ũ_n satisfies for all λ > 0,

so the product sequence is in Π. Applying Proposition 4.1 again, we see that P(ρ_1 > n) is regularly varying with index −1.
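To see where such Π-behavior comes from in the simplest situation: for α = 1, Karamata's theorem gives P(τ_1 > k) ∼ ϕ(k)/k, so that for any fixed λ > 1,

µ_{⌊λn⌋} − µ_n = Σ_{k=n}^{⌊λn⌋−1} P(τ_1 > k) ∼ ϕ(n) log λ   as n → ∞;

that is, the truncated mean µ_n has de Haan-type behavior with auxiliary function ϕ(n). Combined with the strong renewal theorem u_n ∼ 1/µ_n for α = 1 (Table 1), this is the source of the Π-membership of u_n.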
If α = 1, α̃ > 1, then {u_n} is in Π (with some corresponding slowly varying sequence ℓ_n), and ũ_n − 1/µ̃ is regularly varying with index 1 − α̃. Hence, decomposing

u_{⌊λn⌋} ũ_{⌊λn⌋} − u_n ũ_n = (u_{⌊λn⌋} − u_n) ũ_{⌊λn⌋} + u_n (ũ_{⌊λn⌋} − ũ_n)

and dividing by ℓ_n ũ_n, we used that ũ_{⌊λn⌋} − ũ_n is in RVS_{1−α̃} so that the second term in the sum goes to 0 (since u_n/ℓ_n is regularly varying with index 0). Hence P(n ∈ ρ) = u_n ũ_n is in Π, and applying Proposition 4.1, we get that P(ρ_1 > n) is regularly varying with index −1.
We will present the rest of our proof of Theorem 1.3 in the whole range 1 ≤ α ≤ α̃, even though it is now needed only for α = 1; this adds no complexity. The advantage is that it is a more probabilistic approach, in that we use Proposition 4.1 only to get the regular variation of P(ρ_1 > n), and avoid using the un-probabilistic (4.4) (with ν = ρ) to estimate P(ρ_1 > n) as in (4.7). The method also provides an interpretation of the terms µ_n, µ̃_n appearing in Theorem 1.3(iii).

4.2. Some useful preliminary lemmas. Before we prove Theorem 1.3(iii), we need two technical lemmas.

Lemma 4.2. Let τ, σ be independent renewal processes, suppose ρ = τ ∩ σ is recurrent with E(σ_1) < ∞, and let K := min{k ≥ 1 : τ_k ∈ σ}. Then E(K) = E(σ_1).
Proof. Let K_1, K_2, … be i.i.d. copies of K and let S_m := K_1 + ⋯ + K_m. Then τ_{S_m} has the distribution of ρ_m, so, using (4.8), the lemma follows.

Write P_{x,y}(·) for P(· | τ_0 = x, σ_0 = y), and write E_{x,y} for the corresponding expectation.
Proof. Fix x ≤ δn and let

Since P(j ∈ τ) is regularly varying, given η > 0, there exists A (large) such that for δ > 0, for n large we have for all x ≤ δn and Aδn ≤ j ≤ n that

(4.12)    |P(j ∈ τ) − P(j − x ∈ τ)| ≤ (η/2) P(j ∈ τ).

Since U*_k is regularly varying, with positive index since α* > 0, if δ, and therefore Aδ, is sufficiently small, then for large n we have U*_{Aδn} ≤ (η/2) U*_n. With (4.11) this gives that (4.13) holds for large n. With (4.10), this proves (4.9) for large n.
Now consider α ≥ 1, meaning P(k ∈ τ) is slowly varying. Given η > 0, for any δ > 0 we can choose A (small this time) so that U*_{Aδn} ≤ (η/2) U*_n for large n. Inequality (4.12) holds for all j ≥ Aδn and x ≤ δn, for n large, so (4.13) is valid and (4.9) follows.

4.3. Upper bound for P(ρ_1 > n).
the starting and ending points of such a gap are τ_{k−1}, τ_k or σ_{k−1}, σ_k. Let S be the first starting point of a long gap in τ or σ, and let T be the ending point of the gap that starts at S. (To make things well-defined, if both τ and σ have long gaps starting at S, then we take T to be the first endpoint among these two gaps.) Then

For fixed n, we let τ̄_1 have the distribution of τ_1 given τ_1 ≤ (1 − 2ε)n, and similarly for σ̄_1. Let τ̄ and σ̄ be renewal processes with gaps distributed as τ̄_1 and σ̄_1, respectively, and let K := min{k ≥ 1 : τ_k ∈ σ} and K̄ := min{k ≥ 1 : τ̄_k ∈ σ̄}. Then, we have

Thus for large n we have

A similar computation holds for P(ρ_1 ≥ T, S ∈ σ), so we have for large n:

We now need a much smaller bound for the first term on the right side of (4.14). Define U := min τ ∩ (εn, ∞) and V := min σ ∩ (εn, ∞). Then

We may now apply Lemma 4.3 to the last probability. Fix η > 0. Then, since α̃ ≥ α ≥ 1, for n large enough,

Therefore, summing over u, v, the right side of (4.17) is bounded by η P(ρ_1 > εn, U < V), and a similar bound holds when U > V. Hence, combining this with (4.14) and (4.16), we get that

Now we may use (4.1) to control the last term: we finally get that, provided η is small enough, for large n,

(4.20)

4.4. Lower bound for P(ρ_1 > n). We use a modification of our earlier truncation.
Fix n and, analogously to τ̄, σ̄, let τ̂ and σ̂ be renewal processes with gaps distributed as τ_1 ∧ (n + 1) and σ_1 ∧ (n + 1), respectively, and let ρ̂ = τ̂ ∩ σ̂ and K̂ := min{k ≥ 1 : τ̂_k ∈ σ̂}. We call a gap in τ̂ or σ̂ large if its length is n + 1.
The last sup is bounded as in (4.29). For the first probability on the right, using the renewal theorem when α > 1 and [7] when α = 1, we get that there is a constant c_5 such that

The convergence to 0 is straightforward when α > 1, and uses that ϕ(x)/µ_x → 0 as x → ∞ when α = 1 (see for example Theorem 1 in [9, Ch. VIII, Sec. 9]). It follows that the sup in (4.28) approaches 0 as n → ∞. The second probability on the right side of (4.27) is handled similarly, and this proves (4.23).
We now turn to (4.22). We show that for any η > 0, we can take n large enough so that for any j ≥ 1,

(4.31)    P(J_τ ≥ j + 1) ≤ η P(J_τ ≥ j).

Proof of Lemma 1.4: Stretching of gaps
By assumption ρ is recurrent, and we need to show that when n is large, P(ρ_1 = n) ≈ P(ρ_1 = n + k) for all k ∈ (0, εn), with ε small. The idea is to take the set of trajectories of τ and σ such that ρ_1 = n, and to stretch them slightly so that ρ_1 = n + k, see Figure 1. In Section 5.1, we prove that for some δ > 0, conditioned on ρ_1 = n, the largest gap of τ and σ in [0, n] is larger than δn with high probability; see Lemma 5.1. Assume that it is a τ-gap, and that it has length m. Then, in Section 5.2, we show that for ε much smaller than δ we can stretch this τ-gap by k ≤ εn ≤ m, and stretch σ inside this τ-gap by the same k, without altering the probability significantly.
How to "stretch" trajectories, to go from ρ 1 = n to ρ 1 = n + k : we identify the largest gap in τ (which is larger than δn with great probability, see Lemma 5.1) and we stretch it by k, while at the same time stretching one of the three associated σ-intervals (the largest of t 1 , t 2 , t 3 ).See the proof of Lemma 5.2 for more detailed explanations.
5.1. Probability of having a large gap. Denote by A_δ the event that there is a gap (either in σ or τ) longer than δn:

(5.1)

We will show that A^c_δ contributes only a small part of {ρ_1 = n}. Recall that v_n = P(ρ_1 > n)² P(n ∈ ρ).
Lemma 5.1. Assume (1.1). There exist c_6 > 0 and δ_0 > 0 such that if δ ∈ (0, δ_0), then for n sufficiently large,

P(ρ_1 = n ; A^c_δ) ≤ e^{−c_6/δ} v_n.

Proof. On the event {ρ_1 = n} ∩ A^c_δ, all τ and σ gaps are smaller than δn, and therefore all blocks of length at least δn are visited by both τ and σ. We control probabilities in each third of [0, n] separately. To that end, define τ̄ = max τ ∩ (0, n/3), σ̄ = max σ ∩ (0, n/3), and define events

(5.2)

Symmetrically we obtain

Middle third. We need to bound the last probability in (5.6). We divide the interval [n/3, 2n/3] into blocks B_i = [a_{i−1}, a_i] of length Aδn, where A is a (large) constant to be specified. We denote by d^{(i)}_τ and f^{(i)}_τ the first and last renewals, respectively, of τ in B_i, and similarly for d^{(i)}_σ and f^{(i)}_σ.

Using again Lemma A.1, we obtain

The same bound holds if I = 3. If I = 2 we have t_2 ≥ m/3, so k/t_2 ≤ 6δ², and provided that δ is small,

The claim (5.13), and hence the lemma, now follow.

We proceed with the proof of Lemma 1.4. Indeed, the second inequality in (1.8) is immediate from Lemmas 5.1 and 5.2. Also, since v_n is regularly varying, Lemma 5.1 gives that for δ small, for any j ∈ (0, δ³n], P(ρ_1 = n − j ; A^c_δ(n − j)) ≤ 2e^{−c_6/δ} v_n. This and Lemma 5.2 yield that for any k ∈ (0, δ³n] ⊆ (0, 2δ³(n − k)],

We claim that, if ρ is recurrent, there is a constant c_10 > 0 such that for sufficiently small ε > 0, when n is large,

(6.1)    v_n ≤ c_10 A^±_n(ε).

It is sufficient to prove this for A^+_n(ε), since v_n is regularly varying. Consider first α* = 0. It follows readily from (3.1) and Theorem 2.2 that for small ε, when n is large we have

(6.2)

Next consider α* ∈ (0, 1). Here α* = 1 − θ*, so by Theorem 1.3(i),
If α* ∈ (0, 1), then by Theorem 1.3(i), when δ is small we have for large n

and again part (i) of the theorem follows from (6.4) and (6.1).

Table 1. Asymptotics of the renewal mass function P(n ∈ τ) when τ is recurrent, with inter-arrival distribution P(τ_1 = n) = ϕ(n) n^{−(1+α)} as in (1.1).