Convergence of complex martingales in the branching random walk: the boundary

Biggins [Uniform convergence of martingales in the branching random walk. {\em Ann. Probab.}, 20(1):137--151, 1992] proved local uniform convergence of additive martingales in $d$-dimensional supercritical branching random walks at complex parameters $\lambda$ from an open set $\Lambda \subseteq \mathbb{C}^d$. We investigate the martingales corresponding to parameters from the boundary $\partial \Lambda$ of $\Lambda$. The boundary decomposes into several parts: on one part the martingales are not defined; on other parts they are defined but diverge or vanish in the limit; on the remaining part they converge to a non-degenerate limit. The arguments that give this convergence also apply in $\Lambda$ and require weaker moment assumptions than those used by Biggins.


Introduction
Biggins [8] proved local uniform convergence of additive martingales in a supercritical branching random walk on $\mathbb{R}^d$ at complex parameters within a certain open set $\Lambda \subseteq \mathbb{C}^d$.
He used the results obtained to derive a local large deviation result for the point process of the positions in the $n$th generation as $n \to \infty$.
In some situations, the arguments from [8] cover parts of the boundary ∂Λ of Λ, but typically only a proper, possibly empty, subset of ∂Λ. However, the ideas and results required to deal with the boundary are available in the literature, but spread over different papers [1,9,11] and not directly applicable. In this paper, we gather these techniques and results and provide a complete treatment (up to mild moment assumptions) of the convergence of additive martingales on the boundary ∂Λ.
Besides its value in the study of large deviation results for the branching random walk and its intrinsic interest, there is further motivation to study the convergence of additive martingales at complex parameters, particularly on the boundary ∂Λ.
First, in the recent applied probability literature, there are several examples of limit theorems, in which the limiting behavior of a quantity of interest is described by the solution to a complex smoothing equation. This solution can always be chosen as the limit of a suitable additive martingale at a complex parameter, see [17] for a discussion and a collection of examples including fragmentation processes and Pólya urns. Understanding the convergence of additive martingales at the boundary ∂Λ is essential for the study of critical smoothing equations. This is our major motivation for writing the note at hand.

Main results
Model description. We consider a branching random walk in $\mathbb{R}^d$ where $d \in \mathbb{N} = \{1, 2, \ldots\}$. The process starts with an initial ancestor at the origin. The ancestor forms generation 0 of the process and produces offspring placed on $\mathbb{R}^d$ at the points of a point process $\mathcal{Z} = \sum_{j=1}^{N} \delta_{X_j}$ with intensity measure $\mu$. The children of the ancestor form the first generation of the process. Each member of the first generation has children with positions relative to their parent's position given by an independent copy of $\mathcal{Z}$, and so on. We suppose that the branching random walk is supercritical, that is, $\mu(\mathbb{R}^d) = \mathbb{E}[N] > 1$.
The ancestor is identified with the empty tuple $\varnothing$ and its position is $S(\varnothing) = 0$. On some probability space $(\Omega, \mathcal{A}, \mathbb{P})$, let $(\mathcal{Z}(u))_{u \in \mathcal{I}}$ be a family of i.i.d. copies of $\mathcal{Z}$, indexed by the set $\mathcal{I}$ of finite tuples of positive integers. For ease of notation, we assume $\mathcal{Z}(\varnothing) = \mathcal{Z}$. We write $\mathcal{Z}(u) = \sum_{j=1}^{N(u)} \delta_{X_j(u)}$, $u \in \mathcal{I}$. Then $\mathcal{G}_0 := \{\varnothing\}$ is generation 0 of the process and, recursively, $\mathcal{G}_{n+1} := \{uj \in \mathbb{N}^{n+1} : u \in \mathcal{G}_n \text{ and } 1 \le j \le N(u)\}$ is generation $n+1$ of the process, $n \in \mathbb{N}_0 := \mathbb{N} \cup \{0\}$. Define the set of all individuals by $\mathcal{G} := \bigcup_{n \in \mathbb{N}_0} \mathcal{G}_n$. We write $|u| = n$ for $u \in \mathcal{G}_n$ and $|u| < n$ if $u \in \mathcal{G}_k$ for some $k < n$. The position of an individual $u = u_1 \ldots u_n \in \mathcal{G}_n$ is given by $S(u) := X_{u_1}(\varnothing) + \ldots + X_{u_n}(u_1 \ldots u_{n-1})$.
The point process of the positions of the $n$th generation will be denoted by $\mathcal{Z}_n$, that is, $\mathcal{Z}_n := \sum_{u \in \mathcal{G}_n} \delta_{S(u)}$. The sequence of point processes $(\mathcal{Z}_n)_{n \in \mathbb{N}_0}$ is then called a branching random walk.
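The generation-wise construction above can be sketched in a few lines of Python. This is a hypothetical illustration only (function names are ours), specialized to binary splitting with standard normal displacements as in Example 3.1 below:

```python
import random

def next_generation(positions, rng):
    """One step of the BRW: every individual u with position S(u) produces
    children displaced relative to S(u) by an independent copy of the point
    process Z (here: two children with i.i.d. standard normal displacements)."""
    children = []
    for s in positions:
        children.append(s + rng.gauss(0.0, 1.0))
        children.append(s + rng.gauss(0.0, 1.0))
    return children

def simulate_brw(n, seed=0):
    """Positions of generations 0..n, started from a single ancestor at 0."""
    rng = random.Random(seed)
    gens = [[0.0]]  # generation 0: the ancestor at the origin
    for _ in range(n):
        gens.append(next_generation(gens[-1], rng))
    return gens
```

With binary splitting, generation $n$ contains $2^n$ individuals, so the process is supercritical with $\mathbb{E}[N] = 2 > 1$.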
The multivariate Laplace transform $m$ of $\mu$ is given by
$$m(\lambda) := \int e^{-\langle \lambda, x \rangle} \, \mu(\mathrm{d}x) = \mathbb{E}\bigg[\sum_{|u|=1} e^{-\langle \lambda, S(u) \rangle}\bigg],$$
where $\lambda \in \mathbb{C}^d$ and $\lambda = \theta + i\eta$ with $\theta, \eta \in \mathbb{R}^d$. (We adopt the convention from [8] and always write $\theta$ for $\mathrm{Re}(\lambda)$ and $\eta$ for $\mathrm{Im}(\lambda)$.) We are only interested in those $\lambda$ for which $m(\lambda)$ is well-defined, i.e., $\lambda$ from the set
$$\mathcal{D} := \{\lambda \in \mathbb{C}^d : m(\theta) < \infty\}.$$
Throughout, we assume $\operatorname{int} \mathcal{D} \neq \emptyset$. Let $\mathcal{F}_0$ be the trivial $\sigma$-field and, for $n \in \mathbb{N}$, $\mathcal{F}_n := \sigma(\mathcal{Z}(u) : |u| < n)$. Then, for $\lambda \in \mathcal{D}$ with $m(\lambda) \neq 0$, the family
$$Z_n(\lambda) := m(\lambda)^{-n} \sum_{|u|=n} e^{-\langle \lambda, S(u) \rangle}, \quad n \in \mathbb{N}_0,$$
forms a complex martingale with respect to $(\mathcal{F}_n)_{n \in \mathbb{N}_0}$.
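The martingale property is a direct consequence of the branching structure: conditionally on $\mathcal{F}_n$, each individual $u$ in generation $n$ reproduces according to an independent copy of $\mathcal{Z}$, so that

```latex
\begin{aligned}
\mathbb{E}\big[Z_{n+1}(\lambda) \mid \mathcal{F}_n\big]
&= m(\lambda)^{-(n+1)} \sum_{|u|=n} e^{-\langle \lambda, S(u)\rangle}\,
   \mathbb{E}\Big[\sum_{j=1}^{N(u)} e^{-\langle \lambda, X_j(u)\rangle}\Big] \\
&= m(\lambda)^{-(n+1)} \sum_{|u|=n} e^{-\langle \lambda, S(u)\rangle}\, m(\lambda)
 = Z_n(\lambda).
\end{aligned}
```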
We introduce a weaker form of (2.2), namely condition (C1), and, additionally, a moment condition (C2). For $\lambda \in \partial\Lambda_1$, the first condition in (C1) with $\alpha = 1$ gives $m(\theta) = |m(\lambda)|$. Hence $Z_1(\lambda) = Z_1(\theta)$ almost surely. Consequently, $(Z_n(\lambda))_{n \ge 0}$ is a real martingale for $\lambda \in \partial\Lambda_1$. Whether or not the additive martingale in the branching random walk converges in the real case is known from Biggins' martingale convergence theorem [3,7,13]. We therefore omit this case from our analysis. In most situations, (2.3) will hold, i.e., the sets defined above exhaust $\partial\Lambda$. There is a discussion, including a set of (mild) conditions that ensure (2.3) to hold, in Section 3 below.
Main theorems. To unburden the notation, we fix $\lambda \in \mathcal{D}$ and set $L(u) := m(\lambda)^{-n} e^{-\langle \lambda, S(u) \rangle}$ if $u \in \mathcal{G}_n$ for some $n \in \mathbb{N}_0$, and $L(u) := 0$ otherwise. We write $Z_n$ for $Z_n(\lambda)$, $n \in \mathbb{N}$, and $Z$ for $Z(\lambda)$ if the latter exists. By construction, $(Z_n)_{n \ge 0}$ is a complex martingale with $\mathbb{E}[Z_1] = 1$. To avoid trivialities, we assume that $\mathbb{P}(Z_1 = 1) < 1$. Conditions (C1) and (C2), as well as a further condition to which we will occasionally refer, carry over to this simplified notation. We further define $W_n := \sum_{|u|=n} |L(u)|^\alpha$, $n \in \mathbb{N}_0$. Then, by (C1), $(W_n)_{n \ge 0}$ is a nonnegative martingale [3,7,13].
The following theorem is the main result of the paper. It gives convergence of the additive martingales to non-degenerate limits on $\partial\Lambda^{(1,2)}$.
Theorem 2.1. Suppose that (C1) holds with $\alpha \in (1, 2)$ and that (C2) holds. Then $(Z_n)_{n \ge 0}$ converges almost surely and in $L^p$ for every $p < \alpha$ to a non-degenerate limit $Z$.
The following propositions are essentially contained in [11] and provide sufficient conditions for the divergence of the additive martingales on $\partial\Lambda_2$ and $\partial\Lambda_3$, respectively.
Proposition 2.2. Suppose that $\mathbb{P}(N < \infty) = 1$ and that (C1) holds with $\alpha = 2$. Then each of the following two conditions is sufficient for $(Z_n)_{n \ge 0}$ not to converge in probability.
Proposition 2.3. Suppose that $\mathbb{P}(N < \infty) = 1$ and that $\lambda \in \partial\Lambda_3$. Then $(Z_n)_{n \ge 0}$ does not converge in probability.

Remark 2.4.
In both propositions, we require $\mathbb{P}(N < \infty) = 1$. This is because their proofs are based on arguments from [4,11] involving complex multiplicative martingales and convergence of triangular arrays. It may be possible, but certainly tedious, to extend those arguments to the case $\mathbb{P}(N = \infty) > 0$. As we want to keep the presentation short and accessible, we refrain from trying to remove the assumption.
The rest of the paper is organized as follows. In Section 3, we give a brief discussion of the shape of Λ, its boundary and the parts in which the boundary can be divided.
We further give an example to illustrate our results. Section 4 contains the proofs of our results, while Section 5 contains extensions of the main results to a more general, multidimensional situation. Finally, there is an appendix comprising an auxiliary result required in the proof of Theorem 2.1.

Discussion and examples
It is illustrative to first consider examples. Examples. We begin with an example that is strongly reminiscent of the situation studied in [14]. We also refer to [12], where the problem of convergence on $\partial\Lambda^{(1,2)}$ is studied in the different context of Gaussian multiplicative chaos.
Example 3.1 (The Gaussian case with binary splitting). Consider a branching random walk with independent standard Gaussian increments and binary splitting, i.e., $\mathcal{Z} = \delta_{X_1} + \delta_{X_2}$, where $X_1, X_2$ are i.i.d. random variables with standard normal laws. Then $m(\lambda) = 2\exp(\lambda^2/2)$ for all $\lambda \in \mathbb{C}$. By symmetry, it suffices to consider $\theta, \eta \ge 0$ only. Next notice that $\sup\{\theta : \lambda \in \Lambda\} = \sqrt{2 \log 2}$. For fixed $\theta \in [0, \sqrt{2 \log 2}]$, making (3.1) explicit in $\eta^2$ gives the boundary curve of $\Lambda$; the right-hand side assumes its maximum, as a function of $p \in (1, 2]$, at an explicit $p = p(\theta)$. In conclusion, we get the shape depicted in Figure 1 for $\Lambda$.

Figure 1: The set $\Lambda$. The blue lines belong to $\partial\Lambda^{(1,2)}$ and thus correspond to the case $1 < \alpha < 2$. Theorem 2.1 yields that there is convergence to a nontrivial limit on the blue lines. The red lines including the endpoints form $\partial\Lambda_2$, which is dealt with in Proposition 2.2. Parts (a) and (b) of the proposition yield that there is no convergence on the red arcs without the endpoints and at the endpoints, respectively.
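As a numerical sanity check on the value $\sqrt{2\log 2}$, one can evaluate the classical real-parameter criterion of Biggins' martingale convergence theorem, under which $Z_n(\theta)$ has a non-degenerate limit precisely when $\theta\,(\log m)'(\theta) < \log m(\theta)$. The sketch below is our illustration for this example only (function names are ours):

```python
import math

def log_m(theta):
    # log m(theta) with m(theta) = 2 * exp(theta**2 / 2)
    # (Gaussian increments, binary splitting)
    return math.log(2.0) + theta**2 / 2.0

def criterion(theta):
    # theta * (log m)'(theta) - log m(theta); here (log m)'(theta) = theta.
    # Negative inside the region of non-degenerate convergence,
    # zero exactly at the critical parameter.
    return theta**2 - log_m(theta)
```

The criterion simplifies to $\theta^2/2 - \log 2$, which vanishes precisely at $\theta = \sqrt{2 \log 2} \approx 1.1774$, in agreement with $\sup\{\theta : \lambda \in \Lambda\}$ above.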
The lemma explains the choice of $\partial\Lambda^{(1,2)}$. In the situation of the lemma, (C2) is automatically fulfilled if $\gamma > \alpha$. It thus constitutes only a very mild additional moment assumption.

Proofs of the main results
Many-to-one lemma and auxiliary results for random walks. There is a well-known simple formula with far-reaching implications that connects the branching random walk $(\mathcal{Z}_n)_{n \in \mathbb{N}_0}$ with an associated standard random walk $(S_n)_{n \in \mathbb{N}_0}$ on $\mathbb{R}$. This formula is sometimes called the many-to-one lemma and takes the following form here:
$$\mathbb{E}[f(S_0, \ldots, S_n)] = \mathbb{E}\bigg[\sum_{|u|=n} |L(u)|^\alpha f\big(-\log|L(u|_0)|, \ldots, -\log|L(u|_n)|\big)\bigg] \tag{4.1}$$
for all nonnegative Borel-measurable functions $f : \mathbb{R}^{n+1} \to \mathbb{R}$, where $u|_k$ denotes the ancestor of $u$ in generation $k$. The formula is used in many (possibly all) papers on branching random walks. We thus refrain from proving it here. We just mention an important consequence of (4.1): choosing $n = 1$ and $f(x, y) = y$, whenever $S_1$ or $\sum_{|u|=1} |L(u)|^\alpha (-\log|L(u)|)$ is quasi-integrable, we get
$$\mathbb{E}[S_1] = \mathbb{E}\bigg[\sum_{|u|=1} |L(u)|^\alpha (-\log|L(u)|)\bigg]. \tag{4.2}$$
Let $\varepsilon > 0$ be as in (C2) and choose $\varphi$ as in Lemma A.1 with $\delta := 1 + \varepsilon/2$. We extend $\varphi$ to a function on $\mathbb{C}$ by applying it to real and imaginary parts. For $t > 0$, we write $Z_n^{(t)}$ and $D_k^{(t)}$ for the corresponding truncated quantities. In particular, $\lim_{t \to \infty} \mathbb{P}(\sup_{u \in \mathcal{G}} |L(u)| > t) = 0$. Therefore, if we show that $(Z_n^{(t)})_{n \ge 0}$ converges almost surely for every $t > 0$, then we infer that $(Z_n)_{n \ge 0}$ converges almost surely to some finite limit $Z$.
To prove convergence of $(Z_n^{(t)})_{n \ge 0}$, we apply the Topchiȋ-Vatutin inequality for martingales [5, Theorem 1] twice (for the second application note that $D_k^{(t)}$ conditional on $\mathcal{F}_{k-1}$ is a weighted sum of independent, centered and $\varphi$-integrable random variables). Together with the change of measure (4.1), this yields a bound by a series over the generations. To see that the latter series is finite, let $\tau_0 := 0$ and let $\tau_n$ denote the $n$th strictly descending ladder epoch of the walk $(S_k)_{k \ge 0}$, $n \in \mathbb{N}$. Notice that $\mathbb{E}[S_1] \ge 0$ by (4.2), hence $\tau_n$ may be infinite with positive probability. Then, for any $k \ge 0$, there exist unique (random) numbers $n \in \mathbb{N}$ and $j \in \mathbb{N}_0$ such that $\tau_{n-1} \le k = \tau_{n-1} + j < \tau_n$, and the walk restarted at the ladder epochs drifts to $+\infty$. Taken together, using direct Riemann integrability, we infer that the expectation in (4.6) is finite. So far we have shown that there is a constant $C > 0$, not depending on $t$, such that the corresponding bound holds for all $t > 0$. This implies that $Z_n^{(t)} \to Z^{(t)}$ almost surely for some random variable $Z^{(t)}$ and, upon letting $t \to \infty$, also $Z_n \to Z$ almost surely for $Z := \lim_{t \to \infty} Z^{(t)}$. What is more, the bound holds for all sufficiently large $t$. As $\varphi(t)$ is of the order $\log^{1+\varepsilon/2} t$ as $t \to \infty$, the bound above implies that $(|Z_n - 1|^p)_{n \ge 0}$ is uniformly integrable for all $p < \alpha$. Consequently, $Z_n \to Z$ in $L^p$ for all $p < \alpha$. In particular, $\mathbb{E}[Z] = 1$.
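The change of measure (4.1) relies on the normalization $\mathbb{E}[W_1] = \mathbb{E}\big[\sum_{|u|=1} |L(u)|^\alpha\big] = 1$ guaranteed by (C1). As an illustration, here is a hypothetical Monte Carlo sketch for the Gaussian binary example of Section 3 with $\alpha = 2$, for which (C1) amounts to $\theta^2 + \eta^2 = \log 2$ (our computation; all names are ours):

```python
import math
import random

def mean_W1(theta, eta, alpha, n_samples, seed=1):
    """Monte Carlo estimate of E[W_1] = E[sum_{|u|=1} |L(u)|^alpha] for the
    Gaussian binary BRW, where |L(u)| = exp(-theta * X_u) / |m(lambda)|."""
    rng = random.Random(seed)
    abs_m = 2.0 * math.exp((theta**2 - eta**2) / 2.0)  # |m(lambda)|
    total = 0.0
    for _ in range(n_samples):
        w1 = sum(math.exp(-alpha * theta * rng.gauss(0.0, 1.0))
                 for _ in range(2))  # two children per individual
        total += w1 / abs_m**alpha
    return total / n_samples

# a boundary parameter for alpha = 2: theta^2 + eta^2 = log 2
theta = eta = math.sqrt(math.log(2.0) / 2.0)
```

With this parameter choice, $\mathbb{E}[e^{-2\theta X}] = e^{2\theta^2} = 2$ and $|m(\lambda)|^2 = 4$, so the exact value of $\mathbb{E}[W_1]$ is $2 \cdot 2 / 4 = 1$, which the estimate reproduces up to Monte Carlo error.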
Proposition 2.2 and Proposition 2.3 can be proved using minor modifications of the corresponding results in [11]. For the reader's convenience, we sketch the corresponding arguments in the given context.
Both propositions are based on the following lemma.
The proof of the lemma is lengthy and follows along the lines of the proofs of [4,Lemma 4.9] and [11,Lemma 4.7]. We will therefore only give a sketch of the proof.
Sketch of the proof. First notice that if $Z_n \to Z$ in probability as $n \to \infty$, then $Z$ satisfies $Z = \sum_{|u|=n} L(u) [Z]_u$ almost surely for every $n \in \mathbb{N}$. To be more precise, there exists a probability measure on $[0, \infty)$, nondegenerate at 0, whose Laplace transform $\phi$ satisfies the associated fixed-point equation. Indeed, $\phi$ is the Laplace transform of a fixed point of a smoothing transformation on the nonnegative halfline with tilted weights $|L(u)|^\alpha$, $|u| = 1$. Further, $\phi$ is such that $1 - \phi(t)$ is regularly varying of index 1 at 0. These facts are summarized in [2], see in particular Proposition 2.1 and Theorem 3.1 there. As in [11, Section 3.5], using multiplicative martingales and the theory of independent, infinitesimal triangular arrays, one can deduce the assertions of the lemma for $p \in (1, \alpha)$. In particular, for any $p \in (1, \alpha)$, we have $\mathbb{E}[|Z|^p] < \infty$ and thus, by standard martingale theory, $Z_n \to Z$ in $L^p$. Proposition 2.2 can be proved as Theorem 2.3 in [11]. We therefore keep the presentation short here.
Proof of Proposition 2.2. Suppose that $\mathbb{P}(N < \infty) = 1$, that (C1) holds with $\alpha = 2$ and that one of the additional conditions holds. Further, assume for a contradiction that $Z_n \to Z$ in probability as $n \to \infty$. Then we can apply Lemma 4.1 and deduce that $\mathbb{E}[|Z|^p] < \infty$ for every $p \in (1, \alpha)$. Standard martingale theory gives $\mathbb{E}[|Z_n - Z|^p] \to 0$ as $n \to \infty$ for each such $p$. On the other hand, from the Burkholder-Davis-Gundy inequality [10, Theorem 11.3.1] and Jensen's inequality for the concave function $x \mapsto x^{p/2}$, $x \ge 0$, we get as in the proof of Theorem 2.3 in [11] that there exists a constant $c_p > 0$ such that a lower bound of the form (4.10) holds; here, using that, given $\mathcal{F}_{k-1}$, $D_k$ is a weighted sum of centered i.i.d. random variables, we can again apply the Burkholder-Davis-Gundy inequality and then Jensen's inequality. Condition (i) implies that $W_n \to W$ in $L^1$, see [13]. Hence the lower bound in (4.10) is of the order $n^{p/2}$, which tends to $+\infty$ as $n \to \infty$. Condition (ii) implies that $(n^{p/4} W_n^{p/2})_{n \in \mathbb{N}}$ converges in distribution as $n \to \infty$ to a non-degenerate limit and is also uniformly integrable, see [1, Theorem 1.1] and [11, Remark 4.8]. Thus the lower bound in (4.10) is of the order $n^{p/4}$ and again diverges as $n \to \infty$.
Proof of Proposition 2.3. The proposition follows from Lemma 4.1 via contraposition.

Results for higher dimensions
As already pointed out in the introduction, to a large extent, our interest in the problem of complex martingale convergence in the branching random walk comes from its significance in the fixed-point theory for smoothing transformations. As this theory has applications to problems that go (with regard to the dimension) beyond the complex case, we will explain how the results obtained above can be extended.
To be precise, fix a dimension $d \in \mathbb{N}$ and let $\mathcal{S}(d)$ denote the set of real $d \times d$ similarity matrices. A similarity matrix is a matrix that can be written as the product of a positive scaling factor and an orthogonal $d \times d$ matrix. Now suppose that $\mathcal{Z} = \sum_{j=1}^{N} \delta_{L_j}$ is a point process on $\mathcal{S}(d)$ and consider the smoothing equation (5.1), in which $X_1, X_2, \ldots$ are i.i.d. copies of $X$ and independent of $\mathcal{Z}$. An important problem arising when solving (5.1) is the following. Take independent copies $\mathcal{Z}(u)$, $u \in \mathcal{I}$, of $\mathcal{Z}$ on a suitable probability space $(\Omega, \mathcal{A}, \mathbb{P})$ and define $\mathcal{G}_n$, $\mathcal{G}$, $\mathcal{F}_n$ in obvious analogy to the corresponding objects defined in Section 2. Define $L(\varnothing)$ to be the $d \times d$ identity matrix and, for $uj \in \mathcal{G}$, define recursively $L(uj) := L(u)[L(j)]_u$. Now suppose that the matrix $\mathbb{E}[\sum_{|u|=1} L(u)]$ has finite entries only and that it has a right eigenvector $0 \neq w \in \mathbb{R}^d$ to the eigenvalue 1. Then the sequence $(Z_n w)_{n \in \mathbb{N}_0}$ defined via $Z_n w := \sum_{|u|=n} L(u) w$, $n \in \mathbb{N}_0$, is a $d$-dimensional martingale with respect to $(\mathcal{F}_n)_{n \ge 0}$. In slight abuse of common notation, we write $|\cdot|$ not only for the standard Euclidean norm on $\mathbb{R}^d$ but also for the usual matrix norm. Since we only work with similarity matrices, this should cause no confusion. Then condition (C1) makes perfect sense in the given situation, and the following result can be proved along the lines of the proof of Theorem 2.1: Theorem 5.1. Suppose that (C1) holds with $\alpha \in (1, 2)$ and that (C2) holds with $Z_1$ replaced by $Z_1 w$. Then $(Z_n w)_{n \ge 0}$ converges almost surely and in $L^p$ for every $p < \alpha$ to a non-degenerate limit $Zw$.
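Because a similarity matrix is a positive scale times an orthogonal matrix, it scales all Euclidean norms by the same factor, $|Ax| = c\,|x|$, and products of similarities are again similarities with multiplied scales; this is what makes the norm abuse above harmless. A minimal sketch for $d = 2$, where the orthogonal part is a rotation (illustration only, names are ours):

```python
import math

def similarity(scale, angle):
    """A 2x2 similarity matrix: positive scale times a rotation (orthogonal part)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[scale * c, -scale * s],
            [scale * s, scale * c]]

def apply(A, x):
    """Matrix-vector product A x."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def norm(x):
    """Euclidean norm on R^2."""
    return math.hypot(x[0], x[1])
```

For instance, a similarity with scale 0.7 maps any vector of norm 5 to one of norm 3.5, and composing it with a similarity of scale 2.0 scales norms by 1.4.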
This improves Proposition 1.1(c) in [17] in two ways. First, the assumptions on finite absolute moments of $Z_1 w$ are relaxed. Second, the theorem above includes the boundary case $m'(\alpha) = 0$, which is not covered in [17].
Also, with $W_n := \sum_{|u|=n} |L(u)|^\alpha$, $n \in \mathbb{N}_0$, the analog of Lemma 4.1 holds in the given context and thus allows us to conclude the analogs of Propositions 2.2 and 2.3 with $Z_n$ replaced by $Z_n w$, $n \in \mathbb{N}_0$. We refrain from reformulating the corresponding results in the more general context.
(ii) There exists a constant $c > 0$ such that $\varphi(x) \sim c \log^\delta(x)$ as $x \to \infty$.