Fixed Points of the Multivariate Smoothing Transform: The Critical Case

Given a sequence $(T_1, T_2, ...)$ of random $d \times d$ matrices with nonnegative entries, suppose there is a random vector $X$ with nonnegative entries, such that $ \sum_{i \ge 1} T_i X_i $ has the same law as $X$, where $(X_1, X_2, ...)$ are i.i.d. copies of $X$, independent of $(T_1, T_2, ...)$. Then (the law of) $X$ is called a fixed point of the multivariate smoothing transform. Similar to the well-studied one-dimensional case $d=1$, a function $m$ is introduced, such that the existence of $\alpha \in (0,1]$ with $m(\alpha)=1$ and $m'(\alpha) \le 0$ guarantees the existence of nontrivial fixed points. We prove the uniqueness of fixed points in the critical case $m'(\alpha)=0$ and describe their tail behavior. This complements recent results for the non-critical multivariate case. Moreover, we introduce the multivariate analogue of the derivative martingale and prove its convergence to a non-trivial limit.


Introduction
Let $d \ge 2$ and $(T_i)_{i \ge 1}$ be a sequence of random $d \times d$ matrices with nonnegative entries. Assume that $N := \#\{i : T_i \neq 0\}$ is finite a.s. We will presuppose throughout that the $(T_i)_{i \ge 1}$ are ordered in such a way that $T_i \neq 0$ if and only if $i \le N$. Given a random variable $X \in \mathbb{R}^d_\ge = [0,\infty)^d$, let $(X_i)_{i \ge 1}$ be i.i.d. copies of $X$, independent of $(T_i)_{i \ge 1}$. Then $\sum_{i=1}^N T_i X_i$ defines a new random variable in $\mathbb{R}^d_\ge$. If it holds that
$$X \stackrel{d}{=} \sum_{i=1}^N T_i X_i, \tag{1.1}$$
where $\stackrel{d}{=}$ means equality in law, then we call the law $\mathcal{L}(X)$ of $X$ a fixed point of the multivariate smoothing transform (associated with $(T_i)_{i \ge 1}$). By a slight abuse of notation, we will also call $X$ a fixed point.
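To make the transform concrete, the following sketch iterates the map $\eta \mapsto \mathcal{L}\big(\sum_{i=1}^N T_i X_i\big)$ on an empirical sample in dimension $d = 2$. All concrete choices (deterministic $N = 2$, scaled uniform entries for the $T_i$, the initial law) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2          # dimension (toy choice)
N = 2          # deterministic number of weight matrices (a simplifying assumption)

def draw_T():
    """Draw one realization of the weight sequence (T_1, ..., T_N):
    N nonnegative d x d matrices.  The scaled uniform entries are an
    arbitrary illustrative choice, not taken from the paper."""
    return [0.5 * rng.uniform(size=(d, d)) for _ in range(N)]

def smoothing_step(sample):
    """One application of the map S to an empirical sample of X:
    each new point is sum_i T_i X_i with fresh weights and with
    X_1, ..., X_N drawn independently from the current sample."""
    n = len(sample)
    new = np.empty_like(sample)
    for k in range(n):
        T = draw_T()
        picks = sample[rng.integers(n, size=N)]      # i.i.d. copies of X
        new[k] = sum(Ti @ Xi for Ti, Xi in zip(T, picks))
    return new

sample = np.ones((2000, d))        # arbitrary nonnegative initial law
for _ in range(5):
    sample = smoothing_step(sample)

print(sample.shape, bool((sample >= 0).all()))   # → (2000, 2) True
```

A fixed point would be a law left invariant by `smoothing_step`; the sketch only shows that the map preserves nonnegativity.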
This notion goes back to Durrett and Liggett [18]. For $d = 1$, they proved (see also [26, 4]) that properties of fixed points are encoded in the function $m(s) := \mathbb{E} \sum_{i=1}^N T_i^s$ (here $(T_i)_{i \ge 1}$ are nonnegative random numbers): If $m(\alpha) = 1$ and $m'(\alpha) \le 0$ for some $\alpha \in (0,1]$ and some non-lattice and moment assumptions are satisfied, then there is a fixed point which is unique up to scaling. Conversely, the condition $m(\alpha) = 1$, $m'(\alpha) \le 0$ for some $\alpha \in (0,1]$ is also necessary for the existence of fixed points. Moreover, if $\psi(r) = \mathbb{E}\, e^{-rX}$ is the Laplace transform of a fixed point, then there is a positive function $L$, slowly varying at $0$, and $K > 0$ such that
$$1 - \psi(r) \sim K\, r^{\alpha} L(r) \qquad \text{as } r \to 0. \tag{1.2}$$
The function $L$ is constant if $m'(\alpha) < 0$ and $L(t) = (|\log t| \vee 1)$ if $m'(\alpha) = 0$, the latter being called the critical case. For $\alpha < 1$, the property (1.2) implies that the fixed points have Pareto-like tails with index $\alpha$, i.e. $\lim_{t \to \infty} t^{\alpha}\, \mathbb{P}(X > t)/L(1/t) \in (0, \infty)$, see [26] for details. Tail behavior in the case $\alpha = 1$, in which there is no such implication, is investigated in [21, 26, 16].
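For $d = 1$ the objects above can be made completely explicit. The following sketch uses the hypothetical example $N = 2$, $T_i = e^{-X_i}$ with $X_i \sim \mathcal{N}(\mu, \sigma^2)$, for which $m(s) = 2\exp(-s\mu + s^2\sigma^2/2)$ in closed form; the choice $\mu = \sigma^2 = 2\log 2$ puts us in the critical case $m(1) = 1$, $m'(1) = 0$.

```python
import numpy as np

# Critical-case example in d = 1 (an illustrative choice, not from the paper):
# N = 2 children, T_i = exp(-X_i) with X_i ~ Normal(mu, sigma2).  Then
#   m(s) = E sum_i T_i^s = 2 * exp(-s*mu + s^2 * sigma2 / 2),
# and mu = sigma2 = 2*log(2) gives m(1) = 1 and m'(1) = 0 (critical case).
sigma2 = 2 * np.log(2.0)
mu = sigma2

def m(s):
    return 2.0 * np.exp(-s * mu + s**2 * sigma2 / 2.0)

h = 1e-4
m_prime_1 = (m(1 + h) - m(1 - h)) / (2 * h)     # numerical derivative at alpha = 1

# Monte Carlo sanity check of the closed form at s = 1
rng = np.random.default_rng(1)
X = rng.normal(mu, np.sqrt(sigma2), size=200_000)
mc = 2.0 * np.exp(-X).mean()

print(round(float(m(1.0)), 6), round(float(m_prime_1), 6), round(float(mc), 2))
```

The numerical derivative vanishing at $\alpha = 1$ is exactly the criticality $m'(\alpha) = 0$ discussed above.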
Existence and uniqueness results in the multivariate setting $d \ge 2$ for the non-critical case have recently been proved in [27]. The aim of this note is to provide the corresponding result for the multivariate critical case. In order to do so, we will first review necessary notation and definitions from [27], in particular introducing the multivariate analogue of the function $m$, as well as a result about the existence of fixed points in the critical case. Following the approach in [9, 10, 25], we will then prove that a multivariate regular variation property similar to (1.2) holds for fixed points (with an essentially unique, but as yet undetermined, slowly varying function $L$), which we use in order to prove the uniqueness of fixed points up to scalars. Under an extra (density) assumption, we identify the slowly varying function to be the logarithm also in the multivariate case, which allows us to introduce the multivariate version of the so-called derivative martingale, a notion coined in [11], and to prove its convergence. It appears prominently in the limiting distribution of the minimal position in branching random walk, see [1, 2, 11] for details and further references.

Statement of Results
We start by introducing the assumptions and the notation needed for them. Write $\mathcal{P}(\mathbb{R}^d_\ge)$ for the set of probability measures on $\mathbb{R}^d_\ge$ and $M := M(d \times d, \mathbb{R}_\ge)$ for the set of $d \times d$ matrices with nonnegative entries. Given a sequence $T := (T_i)_{i \ge 1}$ of random matrices from $M$, only the first $N$ of which are nonzero, with $N < \infty$ a.s., we aim to determine the set of fixed points of the mapping $S : \mathcal{P}(\mathbb{R}^d_\ge) \to \mathcal{P}(\mathbb{R}^d_\ge)$,
$$S\eta := \mathcal{L}\Big(\sum_{i=1}^N T_i X_i\Big),$$
where $(X_i)_{i \ge 1}$ are i.i.d. with law $\eta$ and independent of $(T_i)_{i \ge 1}$.
Without further mention, we assume (Ω, B, P) to be a probability space which is rich enough to carry all the occurring random variables.
2.1. The weighted branching process and iterations of $S$. Let $\mathbb{V} := \bigcup_{n=0}^{\infty} \mathbb{N}^n$ be a tree with root $\emptyset$ and Ulam–Harris labeling. We write $|v| = n$ if $v = v_1 \cdots v_n \in \mathbb{N}^n$, $v|k = v_1 \cdots v_k$ for the ancestor of $v$ in the $k$-th generation and $vi = v_1 \cdots v_n i$ for the $i$-th child of $v$, $i \in \mathbb{N}$.
To each node $v \in \mathbb{V}$ assign an independent copy $T(v)$ of $T$ and, given a random variable $X \in \mathbb{R}^d_\ge$, as well an independent copy $X(v)$ of $X$, such that $(T(v))_{v \in \mathbb{V}}$ and $(X(v))_{v \in \mathbb{V}}$ are independent. Introduce a filtration by
$$\mathcal{B}_n := \sigma\big( T(v) : |v| < n \big).$$
Upon defining recursively the product of weights along the path from $\emptyset$ to $v$ by
$$L(\emptyset) := \mathrm{Id}, \qquad L(vi) := L(v)\, T_i(v),$$
we obtain the iteration formula
$$X \stackrel{d}{=} \sum_{|v|=n} L(v) X(v),$$
which in terms of the Laplace transform $\phi(x) = \mathbb{E}\, e^{-\langle x, X \rangle}$ becomes
$$\phi(x) = \mathbb{E} \prod_{|v|=n} \phi\big( L(v)^\top x \big). \tag{2.1}$$

2.2. Assumptions. As noted before, we assume

(A1) the r.v. $N := \#\{i : T_i \neq 0\}$ equals $\sup\{i : T_i \neq 0\}$, is finite a.s., $N \ge 1$ a.s. and $1 < \mathbb{E}N < \infty$.

This assumption guarantees that the underlying Galton–Watson tree (consisting of the nodes $v$ with $L(v) \neq 0$) is supercritical and allows us to define a probability measure $\mu$ on $M$ by
$$\mu(A) := \frac{1}{\mathbb{E}N}\, \mathbb{E} \sum_{i=1}^N \mathbf{1}_A(T_i).$$
On the (support of the) measure $\mu$, we will impose the following condition (C): every $a \in \operatorname{supp} \mu$ is allowable, i.e. has no zero row and no zero column, and the semigroup $[\operatorname{supp} \mu]$ contains a matrix with all entries positive. Note that if $a \in M$ is an allowable matrix, then we can define its action on $S_\ge := \{x \in \mathbb{R}^d_\ge : |x| = 1\}$ by $a \cdot u := au/|au|$. Furthermore, we need a multivariate analogue of a non-lattice condition: Recall that a matrix $a$ with all entries positive has an algebraically simple dominant eigenvalue $\lambda_a > 0$ with corresponding normalized eigenvector $v_a$, the entries of which are all positive.
(A3) The additive group generated by $\{\log \lambda_a : a \in [\operatorname{supp} \mu] \text{ has all entries positive}\}$ is dense in $\mathbb{R}$.

Let $M, (M_n)_{n \in \mathbb{N}}$ be i.i.d. random matrices with law $\mu$, and write $\Pi_n := M_1 \cdots M_n$. Then it is shown in [27] that the multivariate analogue of the function $m$ is given by
$$m(s) := \mathbb{E}N \cdot \lim_{n \to \infty} \big( \mathbb{E}\, \|\Pi_n\|^s \big)^{1/n},$$
which is finite on $I_\mu := \{ s \ge 0 : \mathbb{E}\, \|M\|^s < \infty \}$. On $I_\mu$, it is log-convex, and thus the left derivatives $m'(s-)$ exist. We assume to be in the critical case, i.e.

(A4) there is $\alpha \in (0,1]$ such that $m(\alpha) = 1$ and $m'(\alpha-) = 0$.
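A quick numerical illustration of the limit behind the multivariate $m$, under the representation $m(s) = \mathbb{E}N \cdot \lim_n (\mathbb{E}\|\Pi_n\|^s)^{1/n}$ used above. For the degenerate, illustrative choice $\mu = \delta_a$ with a single deterministic symmetric matrix $a$, the limit is exactly $\lambda_a^s$, the Perron root raised to the power $s$:

```python
import numpy as np

# Toy check: for mu = delta_a with a symmetric positive matrix a, the limit
# lim_n (||Pi_n||^s)^(1/n) equals lambda_a^s (Gelfand's formula; here even
# exactly, since the spectral norm of a^n is lambda_a^n for symmetric a).
a = np.array([[2.0, 1.0],
              [1.0, 1.0]])          # allowable, all entries positive
lam = max(np.linalg.eigvalsh(a))    # Perron-Frobenius eigenvalue (3+sqrt(5))/2

s, n = 0.7, 40
Pi_n = np.linalg.matrix_power(a, n)
k_s_approx = np.linalg.norm(Pi_n, 2) ** (s / n)   # (||Pi_n||^s)^(1/n)

print(round(float(k_s_approx), 6), round(float(lam ** s), 6))
```

With genuinely random $M_n$ the limit is no longer a single eigenvalue power; the deterministic case only makes the defining limit visible.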
For the multivariate case, the classical $T \log T$ condition splits into an upper and a lower bound (A5). Introducing $\iota(a) := \inf_{u \in S_\ge} |au|$, we observe that $\iota(a) > 0$ for allowable $a \in M$, and that for all $u \in S_\ge$,
$$\iota(a) \le |au| \le \|a\|.$$
Note that if $a$ is invertible, then $\|a^{-1}\|^{-1} \le \iota(a)$.
Sometimes we will impose the stronger condition

(A6) there is $c > 0$ such that $\mathbb{P}\big( \iota(M^\top) \ge c \big) = 1$,

which together with the first part of (A5) implies the second part of (A5).
In the second part of the paper, we will need stronger assumptions on $\mu$ which guarantee that the associated Markov random walk (to be defined below) is Harris recurrent. We will consider the absolute continuity assumption

(A3c) $\mu$ has a nontrivial component which is absolutely continuous with respect to $l_{d \times d}$,

where $l_{d \times d}$ denotes the Lebesgue measure on the set of $d \times d$ matrices, seen as a subset of $\mathbb{R}^{d^2}$. A similar assumption for invertible matrices appears in [23, Theorem 6] and subsequently in [6]. It is easy to check that (A3c) implies (A3). We will consider as well a quite degenerate case, namely

(A3f) $\operatorname{supp} \mu$ is finite and consists of rank-one matrices, and (A3) holds.
Note that an allowable rank-one matrix $a$ has all entries positive, its columns are multiples of a vector $v_a \in \operatorname{int}(S_\ge)$, and consequently $a \cdot u = v_a$ for all $u \in S_\ge$. We will also impose a stronger moment condition (A7). Note that (A7) implies $m(\alpha + \delta) < \infty$ for small enough $\delta > 0$.
Indeed, for any $0 < s < 1$ and $p$ such that $1/p = (1-s)/p_0 + s/p_1$, using first Jensen's and then Hölder's inequality, the random variable $\sum_{i=1}^N \|T_i\|^s$ has a finite moment of order $p$.

Previous Results.
We have the following existence result in the critical case (Proposition 2.2). The main contribution of this paper is to prove the uniqueness of this fixed point, and to give asymptotic properties of its Laplace transform. It is convenient to introduce polar coordinates $(r, u) \in [0,\infty) \times S_\ge$ on $\mathbb{R}^d_\ge$. Moreover, we will use that for $s \in I_\mu$, the operators $P_s$ and $P_s^*$, being self-mappings of the set $\mathcal{C}(S_\ge)$ of continuous functions on $S_\ge$ and defined by
$$P_s f(u) := \int_M |au|^s f(a \cdot u)\, \mu(da), \qquad P_s^* f(u) := \int_M |a^\top u|^s f(a^\top \cdot u)\, \mu(da),$$
are quasi-compact with spectral radius equal to $k(s) := (\mathbb{E}N)^{-1} m(s)$, and that there are a unique positive continuous function $H_s \in \mathcal{C}(S_\ge)$ and unique probability measures $\nu_s, \nu_s^* \in \mathcal{P}(S_\ge)$ which are eigenfunctions resp. eigenmeasures of these operators corresponding to the eigenvalue $k(s)$, linked by the relation (2.5). See [17] for details and proofs. Using Eq. (2.5), we can extend $H_s$ to an $s$-homogeneous function on $\mathbb{R}^d_\ge$, i.e.
$$H_s(x) := |x|^s\, H_s(x/|x|), \qquad x \in \mathbb{R}^d_\ge \setminus \{0\}.$$
Using H s , we are now going to provide a many-to-one lemma.
Then, by Eq. (2.4), we see that $Q_\alpha$ defines a Markov transition operator on $S_\ge \times \mathbb{R}$. Let $(U_n, S_n)_n$ be a Markov chain in $S_\ge \times \mathbb{R}$ with transition operator $Q_\alpha$, and denote the probability measure on the path space $(S_\ge \times \mathbb{R})^{\mathbb{N}}$ with initial values $(U_0, S_0) = (u, s)$ by $P^\alpha_{u,s}$ and the corresponding expectation by $E^\alpha_{u,s}$. Most of the time, we will use the shorthand notations $P^\alpha_u = P^\alpha_{u,0}$ and $P^\alpha_\eta = \int P^\alpha_{u,s}\, \eta(du, ds)$ for a probability measure $\eta$ on $S_\ge \times \mathbb{R}$.
We call $(U_n, S_n)_{n \in \mathbb{N}}$ the associated Markov random walk. It generalizes the concept of the associated random walk in [18, 26]. In particular, it holds for all $u \in S_\ge$ that $\lim_{n \to \infty} S_n/n = 0$ $P^\alpha_u$-a.s., see [17]. Moreover,
$$W_n(u) := \sum_{|v|=n} \big( S^u(v) + b(U^u(v)) \big)\, H_\alpha(U^u(v))\, e^{-\alpha S^u(v)}$$
defines a martingale with respect to the filtration $\mathcal{B}_n$, which we will show to be the multivariate analogue of the derivative martingale. In fact, $b$ can be considered as the derivative of $H_\alpha$, see [15, (7.9)].
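The additive structure of $(U_n, S_n)$ rests on the cocycle property of $(u, a) \mapsto \log|a^\top u|$. The sketch below checks the resulting telescoping identity for the un-tilted chain $U_n = M_n^\top \cdot U_{n-1}$; the actual associated walk uses the $H_\alpha$-tilted kernel $Q_\alpha$, which is deliberately omitted here.

```python
import numpy as np

# Cocycle check: with U_n = M_n^T . U_{n-1} and increments
# Y_n = log |M_n^T U_{n-1}|, the sum telescopes to
# S_n = log |(M_1 ... M_n)^T u_0|, so the increment at step n depends
# on the past only through U_{n-1}.
rng = np.random.default_rng(2)

def project(x):
    return x / np.linalg.norm(x, 1)       # projection onto the simplex S_>=

u = project(np.array([1.0, 1.0]))
u0, S, prod = u.copy(), 0.0, np.eye(2)
for _ in range(25):
    M = rng.uniform(0.1, 1.0, size=(2, 2))   # random positive matrix
    y = M.T @ u
    S += np.log(np.linalg.norm(y, 1))        # increment log|M^T U_{n-1}|
    u = project(y)                            # U_n = M^T . U_{n-1}
    prod = prod @ M                           # Pi_n = M_1 ... M_n

S_direct = np.log(np.linalg.norm(prod.T @ u0, 1))
print(bool(abs(S - S_direct) < 1e-8))        # → True
```

The same identity is what makes $S_n$ an additive functional of the chain $(U_n)$ after the change of measure.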

Main Results.
Our first result proves that, upon imposing the non-lattice condition (A2) and the stronger moment resp. boundedness assumptions (A6)–(A7), the fixed point given by Proposition 2.2 is unique up to scaling, and satisfies a multivariate analogue of the regular variation property (1.2).
Theorem 2.4. Assume (A1)–(A7). Then there is a random measurable function $Z : S_\ge \to [0,\infty)$ with $\mathbb{P}(Z(u) > 0) = 1$ for all $u \in S_\ge$, such that $X$ is a nontrivial fixed point of (1.1) on $\mathbb{R}^d_\ge$ if and only if its Laplace transform satisfies
$$\psi(ru) = \mathbb{E} \exp\big( -K r^\alpha Z(u) \big), \qquad r \ge 0,\ u \in S_\ge, \tag{2.8}$$
for some $K > 0$. There is an essentially unique positive function $L$, slowly varying at $0$ with $\liminf_{r \to 0} L(r) = \infty$, such that
$$\lim_{r \to 0} \frac{1 - \psi(ru)}{r^\alpha H_\alpha(u)\, L(r)} = 1 \qquad \text{for all } u \in S_\ge. \tag{2.9}$$

Remark. Essentially unique means that if $L_1$ and $L_2$ satisfy Eq. (2.9), then $\lim_{r \to 0} L_1(r)/L_2(r) = 1$. Depending on the value of $\alpha$, additional information can be extracted from Eq. (2.9).
Moreover, the slowly varying function $L$ in Eq. (2.9) can be chosen as (a scalar multiple of) $L(r) = |\log r| \vee 1$.

2.5. Structure of the Paper. The further organization is as follows: In Section 3, we study the associated Markov random walk, which is recurrent due to the criticality assumption. Under assumption (A3c), a regeneration property known from the theory of Harris recurrent Markov chains will be shown to hold. In Section 4, we prove that each fixed point satisfies (2.9), which is a main ingredient in the proof of uniqueness in Section 5. In Section 6, we turn to the proof of Theorem 2.5 and study the behavior of the Laplace transform of the fixed point. We conclude with Section 7, where the convergence of the derivative martingale is proved.
Acknowledgements. The main part of this work was done during mutual visits to the Universities of Muenster and Warsaw, to which we are grateful for hospitality. S.M. was partially supported by the Deutsche Forschungsgemeinschaft (SFB 878). K.K. was partially supported by NCN grant DEC-2012/05/B/ST1/00692.

The Associated Markov Random Walk
In this section, we provide additional information about the associated Markov random walk, in particular about its stationary distribution and recurrence properties. Moreover, we show that it is Harris recurrent and satisfies a minorization condition under the additional assumption (A3c).
3.1. The Associated Markov Random Walk. The Markov chain $(U_n, S_n)_n$ constitutes a Markov random walk, i.e. for each $n \in \mathbb{N}$, the increment $S_n - S_{n-1}$ depends on the past only through $U_{n-1}$; this follows from the definition of $Q_\alpha$. Such Markov random walks generated by the action of nonnegative matrices were first studied by Kesten in his seminal paper [23], and very detailed results are given in [17]. For the reader's convenience, we cite those that are important for what follows. Recall that we denote the Perron–Frobenius eigenvalue and the corresponding normalized eigenvector of a matrix $a \in \operatorname{int}(M)$ by $\lambda_a$ resp. $v_a$.
The derivative of $m$ can be calculated explicitly in terms of these quantities. Source: Sections 4 and 6 of [17].

3.2. Recurrence of Markov Random Walks. By Proposition 3.1 (3), in the critical case $m'(\alpha-) = 0$ the Markov random walk $(S_n)_n$ is centered in the stationary regime and satisfies a strong law of large numbers. Alsmeyer [3] studied recurrence properties of such Markov random walks, which we will make use of.
Proof. Let $A$ be any open set with $\pi^*_\alpha(A) > 0$. By the strong law of large numbers for Markov chains (see [14]),
$$\lim_{n \to \infty} \frac{1}{n} \sum_{k=1}^n f(U_k) = \int f\, d\pi^*_\alpha \qquad P^\alpha_{\pi^*_\alpha}\text{-a.s.} \tag{3.3}$$
for integrable $f$; thus, using $f = \mathbf{1}_A$, we obtain that $P^\alpha_{\pi^*_\alpha}(U_n \in A \text{ infinitely often}) = 1$. Denote the successive hitting times of $A$ by $\tau_n$. Then $(U_{\tau_n}, S_{\tau_n})$ is again a Markov random walk, and $\pi_A := \pi^*_\alpha(\cdot \cap A)/\pi^*_\alpha(A)$ is the stationary probability measure for $(U_{\tau_n})$. The aperiodicity assumption (A3) implies that $(U_n, S_n)$ is nonarithmetic in the sense of [3], see [15] for details. Lemma 1 in [3] gives that $(U_{\tau_n}, S_{\tau_n})$ is nonarithmetic as well. Using (3.3) with $f = \mathbf{1}_A$ again, this gives that $n/\tau_n \to \pi^*_\alpha(A)$ a.s. Combining this with the strong law of large numbers (3) in Proposition 3.1, we deduce that $\lim_{n \to \infty} S_{\tau_n}/n = 0$ a.s. Then Theorem 2 in [3] (for the nonarithmetic case) gives that the recurrence set $\{ s \in \mathbb{R} : \text{for all } \varepsilon > 0,\ S_{\tau_n} \in (s-\varepsilon, s+\varepsilon) \text{ infinitely often} \}$ equals $\mathbb{R}$, which shows that $P^\alpha_{\pi^*_\alpha}(S_{\tau_n} \in B \text{ infinitely often}) = 1$ for every open set $B \subset \mathbb{R}$. In the arithmetic case, the recurrence set is still a closed subgroup of $\mathbb{R}$, which implies the oscillation property.
3.3. Implications of Assumptions (A3c) and (A3f). In this subsection, we explain how Assumptions (A3c) and (A3f) imply that the Markov chain $(U_n)_{n \in \mathbb{N}}$ has an atom (possibly after redefining it on an extended probability space), which can be used to obtain a sequence $(\sigma_n)_{n \in \mathbb{N}}$ of regeneration times for the Markov random walk $(U_n, S_n)$, i.e. stopping times such that $(U_{\sigma_n}, S_{\sigma_n} - S_{\sigma_{n-1}})_{n \in \mathbb{N}}$ becomes an i.i.d. sequence. Namely, we are going to prove the following lemma for the Markov chain $(U_n, Y_n) := (U_n, S_n - S_{n-1})$.
Lemma 3.4. On a possibly enlarged probability space, one can redefine $(U_n, Y_n)_{n \ge 0}$ together with an increasing sequence $(\sigma_n)_{n \ge 0}$ of random times such that the following conditions are fulfilled under any $P^\alpha_u$, $u \in S_\ge$:

(R1) There is a filtration $\mathbb{G} = (\mathcal{G}_n)_{n \ge 0}$ such that $(U_n, Y_n)_{n \ge 0}$ is Markov adapted and each $\sigma_n$ is a stopping time with respect to $\mathbb{G}$.

This lemma is quite immediate under condition (A3f), since Proposition 3.1 (2) shows that the unique stationary measure $\pi^*_\alpha$ for $(U_n)$ under $P^\alpha_u$ is supported on the finite set $S := \{v_a : a \in \operatorname{supp} \mu\}$ (note that $v_{ab} = v_a$ if $a$ has rank one, thus the semigroup $[\operatorname{supp} \mu]$ can be replaced by $\operatorname{supp} \mu$). Moreover, independent of the initial value $u \in S_\ge$, $U_1 \in S$ $P^\alpha_u$-a.s., i.e. $S_\ge \setminus S$ is uniformly transient for $(U_n)_{n \in \mathbb{N}}$, and thus we can study $(U_n)_{n \in \mathbb{N}}$ on the finite state space $S$. Then, if $(\sigma_n)_{n \in \mathbb{N}}$ is a sequence of successive hitting times of a point $u_0 \in S$, the assertions of the lemma follow from the theory of Markov chains with finite state space.

Remark 3.5. A crucial point is that we also obtain the independence of $Y_{\sigma_k}$ from $(U_j, Y_j)_{0 \le j \le \sigma_k - 1}$, thereby strengthening analogous results for invertible matrices obtained in [6, 28].
From now on, assume (A3c). We are going to prove that the chain $(U_n, Y_n)$ satisfies a minorization condition as in [7, Definition 2.2] resp. [30, (M)]. If $v_{a_0} \in S_\ge$ is the Perron–Frobenius eigenvector of the matrix $a_0$ from (A3), then we have the following result (Lemma 3.6), asserting in particular that $\tau/l$ is stochastically bounded by a random variable with geometric distribution, where $\tau$ denotes the hitting time of a neighborhood of $v_{a_0}$.
Source: This is proved in [23, pp. 218–220, proof of I.1], the crucial point being that $a_0$ acts as a strict contraction on $S_\ge$ with attractive fixed point $v_{a_0}$, that small perturbations of $a_0$ still attract to a neighborhood of $v_{a_0}$, and that such matrices are realized with positive probability.
Step 3. Introduce the finite measure $\tilde\eta$. Combining Steps 1 and 2 and Assumption (A3c), there is $\delta > 0$ such that the minorization applies for all $u \in B_\delta(v_{a_0})$. Hence for all $u \in B_\delta(v_{a_0})$, by Assumption (A3c) and using that $l_{d \times d}$ is invariant under transformations by a matrix with determinant $1$ (see [28, proof of Prop. 15.2, Step 1] for more details, using the Kronecker product), the asserted bound follows.

Step 4: To obtain a minorization for the shifted measure $P^\alpha_u$, recall that $H_\alpha$ is bounded from below and above. Upon renormalizing $\eta$ to a probability measure, and thereby determining $\gamma$, we obtain the assertion.

The regeneration times $\sigma_n$ are constructed as follows: Let $(\xi_n)_{n \ge 0}$ be a sequence of i.i.d. Bernoulli($\gamma$) random variables, independent of $(U_n, Y_n)_{n \ge 0}$. Whenever $(U_n, Y_n)$ enters the set $R$, $(U_{n+1}, Y_{n+1})$ is generated according to $\eta$ if $\xi_n = 1$, and according to $(1-\gamma)^{-1}(P - \gamma\eta)$ if $\xi_n = 0$. The total transition probability thus remains $P = P^\alpha_u((U_1, Y_1) \in \cdot)$. Together with Lemma 3.6, this construction immediately gives that $\sigma_1$ is bounded stochastically by a random variable with geometric distribution.
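The splitting construction above can be sketched on a toy Markov chain for which a minorization $P(x, \cdot) \ge \gamma\,\eta$ is explicit. All concrete choices below (the chain $X_{n+1} = X_n/2 + \xi$ with $\xi \sim \mathrm{U}(0,1)$, the set $R = [0,1]$, $\gamma = 1/2$, $\eta = \mathrm{Uniform}[1/2, 1]$) are illustrative assumptions, not the objects of the paper.

```python
import numpy as np

# Nummelin splitting for a toy chain X_{n+1} = X_n/2 + xi, xi ~ U(0,1).
# For x in R := [0,1] the transition density is 1 on (x/2, x/2 + 1), which
# always contains [1/2, 1]; hence P(x,.) >= gamma * eta with gamma = 1/2
# and eta = Uniform[1/2, 1] (a minorization of the type discussed above).
rng = np.random.default_rng(3)
gamma = 0.5

def residual_draw(x):
    """Sample from (1-gamma)^{-1} (P(x,.) - gamma*eta): uniform on
    (x/2, x/2 + 1) conditioned to avoid [1/2, 1] (simple rejection)."""
    while True:
        y = x / 2 + rng.uniform()
        if not (0.5 <= y <= 1.0):
            return y

x, regen_times, regen_values = 1.0, [], []
for n in range(20_000):
    if 0.0 <= x <= 1.0 and rng.uniform() < gamma:
        x = rng.uniform(0.5, 1.0)        # regeneration: next state drawn from eta
        regen_times.append(n)
        regen_values.append(x)
    else:
        # full kernel outside R, residual kernel inside R (coin failed)
        x = x / 2 + rng.uniform() if not (0.0 <= x <= 1.0) else residual_draw(x)

print(len(regen_times) > 0, abs(np.mean(regen_values) - 0.75) < 0.02)
```

At each regeneration the next state has the fixed law $\eta$, independent of the past; the blocks between successive regeneration times are therefore i.i.d., exactly the mechanism used for $(\sigma_n)$.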

Regular Variation of Fixed Points
In this section, we show that every fixed point of S, the existence of which is provided by Proposition 2.2, satisfies the regular variation property (2.9).
Let $\psi$ be the Laplace transform of a fixed point of $S$ in the critical case $m'(\alpha) = 0$. Introduce
$$D(u, t) := \frac{1 - \psi(e^{-t}u)}{H_\alpha(u)\, e^{-\alpha t}}, \qquad (u, t) \in S_\ge \times \mathbb{R}.$$
Our aim is to study the behavior of $D$ as $t$ goes to infinity. Let $u_0$ be given by Corollary 3.3. Following the approach in [25], we are going to show that
$$h_t(u, s) := \frac{D(u, s+t)}{D(u_0, t)}$$
converges to $1$ as $t$ tends to infinity. This shows in particular that $D(u_0, t)$ is slowly varying as $t \to \infty$. We then use the results of [27] to deduce that this already implies that $D(u, t)$ is slowly varying for all $u \in S_\ge$.

Lemma 4.1. For every sequence $(t_k)_{k \in \mathbb{N}}$ tending to infinity, there is a subsequence $(t_n)_{n \in \mathbb{N}}$ such that $h_{t_n}(u, s)$ converges pointwise to a continuous function $h$.

Proof. Since $\psi$ is a Laplace transform and $t$ is fixed, it follows (using the multivariate version of the Bernstein theorem, [13, Theorem 4.2.1]) that the derivative of $f_t$ is completely monotone in the multivariate sense, and hence is the Laplace transform of a probability measure on $\mathbb{R}^d_\ge$, due to [19, Criterion XIII.4.2]. Note that $\varphi_t(0) = 1$, while the limit as $|x| \to \infty$ may be positive, so the corresponding probability measure might have some mass at zero.
Since the set of probability measures is vaguely compact, we deduce that for any sequence $t_k$ tending to infinity, there is a subsequence $t_n$ such that $\varphi_{t_n}$ converges pointwise to the Laplace transform $\varphi$ of a (sub-)probability measure on $\mathbb{R}^d_\ge$, which is continuous except possibly at $0$. Since $\varphi_{t_n}(u_0) = e^{-1} > 0$ for all $n$, it follows that $\varphi > 0$ on $\mathbb{R}^d_\ge$, and hence we obtain that $h_{t_n}$ converges pointwise to a limit $h$, where the function $h$ is continuous on $S_\ge \times \mathbb{R}$.
Lemma 4.2. Let $t_n$ be a sequence such that $h_{t_n}$ converges to a limit $h$. Then $h$ is superharmonic for $(U_n, S_n)$ under $P^\alpha_u$, i.e. $h(u, s) \ge E^\alpha_u\, h(U_1, s + S_1)$.

Proof. Using Eq. (2.1) and a telescoping sum, we obtain (since $\psi$ is a fixed point) a corresponding identity for $h_{t_n}$. Now consider the subsequential limit $t_n \to \infty$: the left-hand side converges by assumption to $h$, while on the right-hand side, Fatou's lemma and the observation that the product tends to $1$ yield the asserted inequality.

Lemma 4.3. The (subsequential limit) function $h$ is constant and equal to $1$ on $\operatorname{supp} \pi^*_\alpha \times \mathbb{R}$.
Proof. It follows from Lemma 4.2 that $h(U_n, s + S_n)$ is a nonnegative supermartingale, which hence converges a.s. as $n \to \infty$. Now assume, for contradiction, that $h(u, s) \neq h(w, t)$ for some $u, w \in \operatorname{supp} \pi^*_\alpha$ and $s, t \in \mathbb{R}$. Since $m'(\alpha) = 0$, $(U_n, S_n)$ under $P^\alpha_{u_0}$ is a recurrent Markov random walk by Lemma 3.2, thus it visits every neighborhood of $(u, s)$ resp. $(w, t)$ infinitely often. But then, due to the a.s. convergence of $h(U_n, s + S_n)$ and the continuity of $h$, we infer that $h$ has to be constant, a contradiction. Since furthermore $h(u_0, 0) = 1$, the assertion follows.
Remark 4.4. Note that here (via Lemma 3.2) the aperiodicity condition enters. It is not needed if α = 1, because then h itself is a multivariate Laplace transform, which is in particular monotone. Then using again the a.s. convergence of h(U n , s + S n ) together with the fact that S n oscillates (see Eq. (3.2)) shows that h has to be constant.

Lemma 4.5. It holds that
$$\lim_{t \to \infty} h_t(u, s) = 1,$$
and the convergence is uniform on compact subsets of $S_\ge \times \mathbb{R}$. In particular, the positive function
$$L(r) := \frac{1 - \psi(r u_0)}{H_\alpha(u_0)\, r^\alpha} \tag{4.3}$$
is slowly varying at $0$.

Proof. Combining Lemmata 4.2 and 4.3, we obtain that for every sequence $t_k \to \infty$ there is a subsequence $t_n \to \infty$ such that $h_{t_n}(u_0, s) \to 1$ for each $s \in \mathbb{R}$. Since all subsequential limits are the same, we infer that $\lim_{t \to \infty} h_t(u_0, s) = 1$ for all $s \in \mathbb{R}$, which in particular proves the slow variation assertion about the function $L(r)$, for $L(sr)/L(r) = h_{-\log r}(u_0, -\log s)$. Using this estimate, $\psi$ is $L$-$\alpha$-regular in the sense of [27].
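The relation $L(sr)/L(r) \to 1$ as $r \to 0$ is easy to observe numerically for the candidate $L(r) = |\log r| \vee 1$ from Theorem 2.5 (a purely illustrative check):

```python
import math

# Slow variation at 0: L(s*r)/L(r) -> 1 as r -> 0 for every fixed s > 0.
def L(r):
    return max(abs(math.log(r)), 1.0)

s = 0.01
ratios = [L(s * r) / L(r) for r in (1e-3, 1e-6, 1e-12, 1e-24)]
print([round(q, 4) for q in ratios])   # → [1.6667, 1.3333, 1.1667, 1.0833]
```

The ratio decreases to $1$ only logarithmically in $1/r$, which is the reason the critical case is so much more delicate than the non-critical one, where $L$ is constant.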

Uniqueness of Fixed Points
In this section, we are going to finish the proof of Theorem 2.4. To this end, we show that the slowly varying function appearing in (2.9) is essentially unique, and that this property then identifies the fixed points. The approach is the multivariate analogue of [9, Theorem 8.6].
We start with the following lemma, the proof of which we postpone to the end of this section so as not to interrupt the argument. For $u \in S_\ge$ and $t \in \mathbb{R}$, we introduce the homogeneous stopping line
$$\mathcal{I}^u_t := \big\{ v \in \mathbb{V} : S^u(v) > t \text{ and } S^u(v|k) \le t \text{ for all } k < |v| \big\}.$$
Since $\max_{|v|=n} \|L(v)\| \to 0$ $\mathbb{P}$-a.s. by Lemma 5.1, this stopping line is finite $\mathbb{P}$-a.s. and intersects the whole tree (is dissecting).
Let $\psi$ be a fixed point of $S$. Define
$$M_n(x) := \prod_{|v|=n} \psi\big( L(v)^\top x \big).$$
By Eq. (2.1), this constitutes a bounded martingale w.r.t. $\mathcal{B}_n$ for every $x$, and we call its $\mathbb{P}$-a.s. limit $M(x) \in [0, 1]$ the disintegration of the fixed point $\psi$. Setting
$$Z(x) := -\log M(x) \in [0, \infty],$$
the martingale property together with boundedness implies that $\psi(x) = \mathbb{E} \exp(-Z(x))$ for all $x \in \mathbb{R}^d_\ge$. Following the proof of [5, Lemma 4.1], one can show that $M(\cdot, \omega)$ is a Laplace transform for $\mathbb{P}$-a.e. $\omega \in \Omega$, and that $M$ is jointly measurable on $S_\ge \times \Omega$. This implies the same for $Z$. In particular,
$$(1) \quad \lim_{n \to \infty} \sum_{|v|=n} F\big( L(v)^\top x \big)\big( 1 - \psi(L(v)^\top x) \big) = \gamma Z(x) \quad \mathbb{P}\text{-a.s.}$$
Reasoning as in the proof of [17, Theorem 2.7, Step 6], we see that for any nontrivial fixed point $X$ of $S$, $\mathbb{P}(X = 0) = 0$, and consequently $Z(u) > 0$ $\mathbb{P}$-a.s. On the other hand, since $\psi$ is the Laplace transform of a random variable on $\mathbb{R}^d_\ge$, $Z(u) < \infty$ $\mathbb{P}$-a.s.
The subsequent lemma is where we use assumption (A6). Using the definition of $\mu$, (A6) implies that, with $c' := -\log c$, the increments $S(vi) - S(v)$ are $\mathbb{P}$-a.s. bounded by $c'$.
Proof. By Lemma 4.5,
$$\lim_{t \to \infty} \frac{1 - \psi(e^{-s-t} y)}{H_\alpha(y)\, e^{-\alpha(s+t)}\, L(e^{-t})} = 1,$$
and the convergence is uniform on compact sets in $(y, s)$. In particular, it is uniform on the set $S_\ge \times [0, c']$. Now applying this result with $s = S^u(v) - t$ and $y = U^u(v)$ for $v \in \mathcal{I}^u_t$, and using that $0 \le S^u(v) - t \le c'$ by Assumption (A6), we deduce the claim from Proposition 5.2 (5).

Remark 5.4. The idea of this proof follows that of [9, Theorem 8.6]. There an assumption similar to (A6) is avoided by using the theory of general branching processes, see [22, 29]. A similar approach is taken in [27] in the non-critical case, a crucial ingredient of which is an application of Kesten's renewal theorem [24, Theorem 1]. In the critical case, a variant of Kesten's renewal theorem for driftless Markov random walks, or a strong theory of Wiener–Hopf factorization, seems to be needed in order to proceed along similar lines. Now we are ready to prove our main result.
Step 2: Let now $\psi_2$ be the Laplace transform of a different nontrivial fixed point, with corresponding disintegration $M_2$ and $Z_2$, and slowly varying function $L_2$, defined by (4.3) using the same $u_0$ as before. Recall that $Z(u)$ and $Z_2(u)$ are $\mathbb{P}$-a.s. positive and finite by Proposition 5.2 (4) for each $u \in S_\ge$. Then we have by Lemma 5.3 that, $\mathbb{P}$-a.s.,
$$Z_2(u) = \lim_{t \to \infty} \frac{L_2(e^{-t})}{L(e^{-t})}\, Z(u).$$
First, fixing $u \in S_\ge$, this proves that the limit on the right-hand side exists and equals some $K \in (0, \infty)$. Then, using the equation again for general $u$, we obtain $Z_2(u) = K Z(u)$ $\mathbb{P}$-a.s. Consequently, $\psi_2(ru) = \mathbb{E} \exp(-K r^\alpha Z(u))$, which proves Eq. (2.8).
Step 3: Fix $L$ to be the slowly varying function corresponding to $\psi$. Then Eq. (2.9) follows from Eq. (4.4) for this particular $\psi$. The final assertion about $\limsup_{r \to 0} L(r)$ will be proved in Lemma 5.6.

Proof. Since $W(u)$, as the limit of a nonnegative martingale, is again nonnegative, it suffices to show that $\mathbb{E}W(u) = 0$. It even suffices to show that $\mathbb{E}W(u_0) = 0$ for one $u_0 \in \operatorname{int}(S_\ge)$, due to nonnegativity. It is shown in [11, Theorem 2.1 (iii)] that $\mathbb{E}W(u_0) = 0$ follows from $\limsup_{n \to \infty} H_\alpha(U_n) e^{\alpha S_n} = \infty$ $P^\alpha_{u_0}$-a.s. But the latter is a direct consequence of (3.5), together with the strict positivity of $H_\alpha$.

Determining the Slowly Varying Function
In this section, we work under one of the additional assumptions (A3c) or (A3f), together with (A7). We want to identify the slowly varying function $L$, which was (given a nontrivial fixed point $\psi$ and a reference point $u_0 \in \operatorname{supp} \nu^*_\alpha$) defined in Eq. (4.3) to be
$$L(r) = \frac{1 - \psi(r u_0)}{H_\alpha(u_0)\, r^\alpha}.$$
We are going to show that
$$\lim_{t \to \infty} \frac{D(u_0, t)}{t} \in (0, \infty), \tag{6.1}$$
which gives that $\lim_{r \to 0} L(r)/|\log r| = K'$ for some $K' > 0$, i.e. we may choose the slowly varying function to be a scalar multiple of $|\log r| \vee 1$.
The basic idea to prove Eq. (6.1) comes from [18]: one uses a renewal equation satisfied by (the one-dimensional analogue of) $D(u_0, t)$. In the present multivariate situation, we obtain a Markov renewal equation for a driftless Markov random walk. By an application of the regeneration lemma, we can reduce this again to a (one-dimensional) renewal equation for a driftless random walk, for which enough theory is known to solve it.

The Renewal Equation.
In this subsection we present the Markov renewal equation for D(u, t) and show how, using Lemma 3.4, it can be replaced by a one-dimensional renewal equation.
From now on, assume that the assumptions of the Regeneration Lemma, Lemma 3.4 are satisfied, i.e. there is a sequence of stopping times (σ n ) n∈N and a probability measure η on S ≥ × R such that in particular (3.4) holds.
For any nonnegative measurable function $F$ on $S_\ge \times \mathbb{R}$ we define $\hat{F} : \mathbb{R} \to \mathbb{R}$ by integrating out the space component with respect to $\eta$. Moreover, under each $P^\alpha_u$, let $(V_n)_{n \in \mathbb{N}}$ be a zero-delayed random walk with increment distribution $P^\alpha_\eta(S_{\sigma_1 - 1} \in \cdot)$, independent of all other occurring random variables. Note that $V_n$ is a driftless random walk.

Lemma 6.3. For any nonnegative measurable function $F$ on $S_\ge \times \mathbb{R}$ and $k \ge 0$, the identity (6.2) holds.

Proof. We proceed by induction. By the definition of $\hat{F}$, the equation holds for $k = 0$. Suppose now that it holds for some $k \ge 0$. Then the induction step follows, where (3.4) from Lemma 3.4 is used in the second equality and we denote by $(U'_n, S'_n)$, $V'_k$ an independent copy of $(U_n, S_n)$, $V_k$ with corresponding expectation $E^{\alpha\prime}_\eta$. Now we can formulate the univariate renewal equation corresponding to Eq. (6.2).
Proof. Since $(U_n, S_n)$ is a Markov chain, the Markov renewal equation (6.2) implies that the associated process $M_n$ is a $P^\alpha_u$-martingale (with respect to the filtration $\mathcal{G}_n$) for each $u \in \operatorname{supp} \nu^*_\alpha$. Since $\tau = \sigma_1 - 1$ is a stopping time by (3.4), the optional stopping theorem yields (6.7). Equating the right-hand sides of (6.6) and (6.7) and integrating with respect to $\eta$, we obtain the assertion.

Solving the Renewal Equation.
In this subsection, we will show that $\lim_{t \to \infty} D(u_0, t)/t = 1$. Before we can use the renewal equation, we first have to consider some technicalities, e.g. the direct Riemann integrability of $g$. We start by considering moments of $V_1$.
Proof. We prove the boundedness of $E^\alpha_\eta e^{-\delta V_1}$ and $E^\alpha_\eta e^{\delta V_1}$ separately, starting with the first one. Property (R4) implies that there exists $\delta_0$ such that $\sup_u E^\alpha_u e^{\delta_0(\sigma - 1)} < \infty$. Due to Assumption (A7), there is $\varepsilon > 0$ such that $m(\alpha + \varepsilon) \le e^{\delta_0}$. Observe that there is $C_\varepsilon < \infty$ such that
$$e^{-\varepsilon S_n} \le C_\varepsilon\, N_n\, m(\alpha + \varepsilon)^n, \qquad N_n := e^{-\varepsilon S_n}\, \frac{H_{\alpha+\varepsilon}(U_n)}{H_\alpha(U_n)}\, m(\alpha + \varepsilon)^{-n},$$
where $N_n$ is a martingale under $P^\alpha_u$ with expectation bounded by $C_\varepsilon$, due to Proposition 2.3. Therefore, the optional stopping theorem and Fatou's lemma give the corresponding bound at time $\sigma - 1$. The choice of $\varepsilon$ gives us $\sup_u E^\alpha_u\, m(\alpha + \varepsilon)^{\sigma - 1} < \infty$, hence the first bound follows by the Cauchy–Schwarz inequality. For the second part, recall that Assumption (A6) implies that the increments of $S_n$ are bounded from above by $-\log c$. Therefore, $e^{\delta V_1}$ can be bounded in terms of $e^{\delta c' (\sigma - 1)}$, and integrating with respect to $\eta$ finishes the proof.
Before proving that g(t) is dRi, we need the following consequence of the slow variation of D(u 0 , t) (for t → ∞).
Lemma 6.7. Assume in addition (A6) and (A7). Then the function $g$ is nonnegative and directly Riemann integrable.
Proof. Referring to Lemma 6.2, $G$ is nonnegative and $t \mapsto e^{-\alpha t} G(t)$ is decreasing, hence the same holds for $g$. For such functions, a sufficient condition for direct Riemann integrability is that $g \in L^1(\mathbb{R})$, see [20, Lemma 9.1]. Since moreover, by Lemma 3.4, $\mathbb{E}\sigma_1 < \infty$, it suffices to show the integrability of $g^* : t \mapsto \sup_{u \in S_\ge} G(u, t)$.
Using Lemma 6.6, the boundedness of $H_\alpha$, and the fact that $h(x)$ is increasing and comparable with $\min(x, x^2)$ on the positive half line, the latter can be bounded accordingly.

Proof. Recalling the definition of $h_t$ from Section 4 and using Lemma 4.5, $\lim_{t \to \infty} h_t \equiv 1$. Lemmata 6.5 and 6.6 allow us to apply the dominated convergence theorem to obtain the assertion.

Now we can identify the slowly varying function.

Theorem 6.9. Assume that a function $\bar D$ with $\bar D(t+s)/\bar D(t) \to 1$ satisfies the renewal equation (6.5) with a directly Riemann integrable function $g$ and a nonarithmetic random variable $V_1$ such that $E^\alpha_\eta\, e^{\delta |V_1|} < \infty$ for some positive $\delta$. Then $\lim_{t \to \infty} \bar D(t)/t$ exists and is positive.
Source: The proof is almost the same as that of Theorem 2.18 in [18]. Note that, although in [18] the derivative of $\bar D$ is used, this can easily be avoided.

The Derivative Martingale
In this section, we finish the proof of Theorem 2.5 by proving the convergence of
$$W_n(u) = \sum_{|v|=n} \big( S^u(v) + b(U^u(v)) \big)\, H_\alpha(U^u(v))\, e^{-\alpha S^u(v)}$$
to a nontrivial limit, which constitutes the exponent of fixed points. The assertions of Theorem 2.5 are contained in the theorem below, except for the identification of the slowly varying function, which was given in Section 6, in particular in Theorem 6.9. In the proof, we can replace $S^u(v)$ by $S^u(v) + b(U^u(v))$, and obtain
$$\lim_{n \to \infty} \sum_{|v|=n} \big( S^u(v) + b(U^u(v)) \big)\, H_\alpha(U^u(v))\, e^{-\alpha S^u(v)} = K' Z(u) \quad \mathbb{P}\text{-a.s.}$$
This shows the $\mathbb{P}$-a.s. convergence of $W_n(u)$ to $\mathcal{W}(u) := K' Z(u)$. Then $\mathbb{P}(\mathcal{W}(u) > 0) = 1$ by Proposition 5.2 (4). That $\psi(ru) = \mathbb{E}\, e^{-r^\alpha \mathcal{W}(u)}$ is a fixed point follows immediately, since $\mathbb{E}\, e^{-r^\alpha K' Z(u)}$ is a fixed point for any $K' > 0$.
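In $d = 1$ (where $H_\alpha$ is constant and $b \equiv 0$) the derivative martingale reduces to $W_n = \sum_{|v|=n} S(v) e^{-S(v)}$. The sketch below simulates it for a critical binary branching random walk with $\mathcal{N}(2\log 2,\, 2\log 2)$ displacements, an illustrative choice satisfying $m(1) = 1$, $m'(1) = 0$; it only illustrates that $W_n$ is well defined at finite depth, not its convergence.

```python
import numpy as np

# One-dimensional shadow of Theorem 2.5: critical binary branching random
# walk with i.i.d. Normal(2 log 2, 2 log 2) displacements (m(1) = 1,
# m'(1) = 0).  The derivative martingale is W_n = sum_{|v|=n} S(v) e^{-S(v)},
# the d = 1 analogue of sum (S(v) + b(U(v))) H_alpha(U(v)) e^{-alpha S(v)}.
rng = np.random.default_rng(4)
mu = sigma2 = 2 * np.log(2.0)

S = np.zeros(1)                     # positions of the current generation
for n in range(14):
    S = np.repeat(S, 2) + rng.normal(mu, np.sqrt(sigma2), size=2 * len(S))

W_n = float(np.sum(S * np.exp(-S)))         # derivative martingale at depth 14
A_n = float(np.sum(np.exp(-S)))             # additive martingale (tends to 0)

print(len(S) == 2**14, np.isfinite(W_n), A_n > 0)
```

Consistent with the proof above (where $\mathbb{E}W(u) = 0$ for the additive martingale), $A_n$ drifts to $0$ as the depth grows, while $W_n$ is the object that converges to the nontrivial limit $\mathcal{W}$.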