Absolute Continuity of Complex Martingales and of Solutions to Complex Smoothing Equations

Let $X$ be a $\mathbb{C}$-valued random variable with the property that $$X \ \text{ has the same law as }\ \sum_{j\ge1} T_j X_j,$$ where the $X_j$ are i.i.d.\ copies of $X$, independent of the (given) $\mathbb{C}$-valued random variables $(T_j)_{j\ge1}$. We provide a simple criterion for the absolute continuity of the law of $X$ that requires, besides the known conditions for the existence of $X$, only finiteness of the first and second moments of $N$, the number of nonzero weights $T_j$. Our criterion applies in particular to Biggins' martingale with complex parameter.


Introduction
In a variety of models coming from theoretical computer science, applied probability, economics or statistical physics, quantities of interest exhibit asymptotic fluctuations that do not have a normal or α-stable distribution. In many cases, the limiting law $\mu$ can be characterized as a fixed point of a mapping $S$ of the form
$$S(\mu) := \mathcal{L}\Big(\sum_{j\ge1} T_j X_j\Big), \quad (1)$$
where the $X_j$ are i.i.d. complex-valued random variables with law $\mu$, independent of the given complex random variables $(T_j)_{j\ge1}$. See [10] and the references therein for a list of examples.
Let $X$ be a complex random variable with law $\mu$ such that $S(\mu) = \mu$. Then
$$X \stackrel{d}{=} \sum_{j\ge1} T_j X_j. \quad (2)$$
This gives rise as well to an equation for the characteristic function $\varphi(\xi) = E[e^{-i\langle \xi, X\rangle}]$, namely
$$\varphi(\xi) = E\Big[\prod_{j\ge1} \varphi(\bar{T}_j \xi)\Big]. \quad (3)$$
The set of all solutions to $S(\mu) = \mu$ has been described in [10] under assumptions (A1)–(A3), which require in particular that, for some $\alpha \in (0,2]$, $E\sum_{j\ge1} |T_j|^{\alpha} = 1$, $E\sum_{j\ge1} |T_j|^{\alpha} \log|T_j| \in (-\infty, 0)$ and $E[W_1 \log^+ W_1] < \infty$, where $W_1 := \sum_{j\ge1} |T_j|^{\alpha}$.
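To make the passage from the fixed-point property to the characteristic-function equation explicit, the following computation can be used; the convention $\langle \xi, z\rangle = \operatorname{Re}(\bar\xi z)$ for the real inner product on $\mathbb{C} \cong \mathbb{R}^2$ is an assumption of this sketch (other sign conventions move the conjugation around).

```latex
% Sketch: derivation of the functional equation for \varphi, assuming the
% convention <xi, z> := Re(conj(xi) z) on C identified with R^2.
\langle \xi, Tz\rangle
  = \operatorname{Re}\big(\bar{\xi}\,T z\big)
  = \operatorname{Re}\big(\overline{\bar{T}\xi}\,z\big)
  = \langle \bar{T}\xi, z\rangle
  \qquad (\xi, T, z \in \mathbb{C}).
% Conditioning on the weights and using independence of the X_j:
\varphi(\xi)
  = E\,e^{-i\langle \xi,\ \sum_{j\ge1} T_j X_j\rangle}
  = E \prod_{j\ge1} E\big[\,e^{-i\langle \bar{T}_j\xi,\ X_j\rangle}\ \big|\ (T_k)_{k\ge1}\big]
  = E \prod_{j\ge1} \varphi(\bar{T}_j\xi).
```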
Let $\mathbb{U} \subset \mathbb{C}$ be the smallest closed multiplicative subgroup containing the support of $(T_j)_{j\ge1}$. Suppose that (A1)–(A3) hold with $\alpha \neq 1$ (in the case $\alpha = 1$, an additional technical assumption is required). Then, by [10, Theorem 1.2], there exist a nonnegative random variable $W$ with unit mean and a $\mathbb{C}$-valued random variable $Z$ such that, if the law of $X$ is a fixed point of $S$, then
$$X \stackrel{d}{=} xZ + Y_W, \quad (4)$$
where $x \in \mathbb{C}$ and $(Y_t)_{t\ge0}$ is a complex-valued Lévy process, independent of $(W, Z)$, with the invariance property
$$\big(u\,Y_{|u|^{-\alpha} t}\big)_{t\ge0} \stackrel{d}{=} (Y_t)_{t\ge0} \quad \text{for all } u \in \mathbb{U}. \quad (5)$$
Note that $Y_t \equiv 0$ is a valid choice. If $(Y_t)_{t\ge0}$ is nontrivial, then $E|Y_W|^{\alpha} = \infty$, see [10, Remark 1.4].

Martingales and the Weighted Branching Process
To give a description of $W$ and $Z$, let us define a weighted branching process as follows: Let $\mathbb{V} = \bigcup_{n=0}^{\infty} \mathbb{N}^n$ denote the infinite tree with Ulam–Harris labelling and root $\emptyset$. For each $v \in \mathbb{V}$, we denote by $|v|$ its generation. To each $v \in \mathbb{V}$, we attach an independent copy $(T_1(v), T_2(v), \ldots)$ of $(T_j)_{j\ge1}$ and define the weighted branching process by $L(\emptyset) := 1$ and $L(vj) := L(v)\,T_j(v)$ for $v \in \mathbb{V}$, $j \in \mathbb{N}$. Here, (A2) implies that $W_n := \sum_{|v|=n} |L(v)|^{\alpha}$ is a nonnegative martingale, and (A3) guarantees its convergence in $L^1$, with limit $W$, by Biggins' theorem. Further, $Z = 0$ unless $E\sum_{j=1}^{N} T_j = 1$ and $\alpha \ge 1$. If these requirements are satisfied, then $Z_n := \sum_{|v|=n} L(v)$ defines a $\mathbb{C}$-valued martingale with mean one. In our results, we will require that

(Z1) $\lim_{n\to\infty} Z_n$ exists a.s. and in $L^1$.

We set $Z := \lim_{n\to\infty} Z_n$ if (Z1) holds, and $Z := 0$ otherwise. Under (A1)–(A2), a sufficient condition (A4) for $Z_n$ to converge a.s. and in $L^p$ for all $p < \alpha$ is that $\alpha \in (1, 2)$ together with a suitable moment condition on the weights.
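The mean-one martingale property of $Z_n$ can be checked numerically on a toy weight vector. The following sketch is not from the paper: it assumes binary branching with weights $T_j = e^{i\theta_j}/c$, where the $\theta_j$ are uniform on $(-a, a)$ and $c = 2\sin(a)/a$ is chosen so that $E[T_1 + T_2] = 1$, and it verifies by Monte Carlo that $E[Z_n] \approx 1$.

```python
import cmath
import math
import random

def sample_Zn(n, a=1.0, rng=random):
    """One realization of Z_n = sum_{|v|=n} L(v) for a toy weight vector:
    each node has N = 2 children with weights T_j = e^{i*theta_j} / c,
    theta_j ~ Uniform(-a, a) i.i.d., and c = 2*sin(a)/a, so E[T_1 + T_2] = 1
    (since E[e^{i*theta}] = sin(a)/a for theta uniform on (-a, a))."""
    c = 2.0 * math.sin(a) / a
    gen = [1.0 + 0.0j]          # weights L(v) of the current generation
    for _ in range(n):
        nxt = []
        for L in gen:
            for _ in range(2):  # binary branching
                theta = rng.uniform(-a, a)
                nxt.append(L * cmath.exp(1j * theta) / c)
        gen = nxt
    return sum(gen)

random.seed(1)
m = 20000
mean = sum(sample_Zn(3) for _ in range(m)) / m
# E[Z_n] = 1 for every n; the Monte Carlo mean should be close to 1.
```

Since $E\sum_j |T_j|^2 < 1$ for this choice of weights, the martingale is $L^2$-bounded and the Monte Carlo average concentrates around one.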

Results
We study the absolute continuity of $Z$. By the discussion above, we may focus on the case $1 < \alpha < 2$. We further assume that $P(N = 0) = 0$, since otherwise all solutions have an atom at zero.
Under these assumptions, together with the moment condition (C1) $E[N^2] < \infty$, the law of $Z$ is absolutely continuous (Theorem 2.1). If $\mathrm{supp}(Z) \subset \mathbb{R}$, then (C1) can be replaced by the weaker assumption $E[N] < \infty$. This applies in particular to (1) with real-valued $T_j$.
As mentioned before, (A4) is a mild sufficient condition for (Z1). If higher-order moment conditions on $Z$ and $N$ are satisfied, one can prove further smoothness properties of the Fourier transform of $Z$; see Remark 3.6.
Concerning $(Y_t)_{t\ge0}$, standard arguments yield the following continuity result (Proposition 2.2): suppose that $(Y_t)_{t\ge0}$ is a nondegenerate complex-valued Lévy process satisfying (5) and that there is no nontrivial proper $\mathbb{U}$-invariant $\mathbb{R}$-linear subspace of $\mathbb{C}$. Then, for each $t > 0$, the law of $Y_t$ is absolutely continuous.
Combining both results and using that $(Y_t)_{t\ge0}$ is independent of $(W, Z)$ in the representation (4), we obtain: the law of any nontrivial solution to (2) is absolutely continuous.

Examples

Biggins' martingale with complex parameter
A branching random walk is defined as follows. An ancestor at the origin produces offspring which are displaced on $\mathbb{R}$ according to a point process. Each new particle then again produces offspring, independently of all other particles, according to the same law. Denote the positions of the $n$-th generation particles by $(S(v))_{|v|=n}$ and suppose that, for a given $\lambda \in \mathbb{C}$,
$$m(\lambda) := E\sum_{|v|=1} e^{-\lambda S(v)}$$
exists and is nonzero. Then
$$W_n(\lambda) := m(\lambda)^{-n} \sum_{|v|=n} e^{-\lambda S(v)}$$
defines a $\mathbb{C}$-valued martingale that coincides with $Z_n$ upon identifying $T_j = e^{-\lambda S(j)}/m(\lambda)$. These complex martingales were studied in [1] to analyze the frequencies of particles with a certain speed in the branching random walk. Let us consider a simple branching random walk with binary branching, i.e., each particle has two offspring, displaced by $S(1)$ and $S(2)$. For given values of $\lambda$, the assumptions of Theorem 2.1 are readily checked. Figures 1 and 2 show estimates of the density of $W$ for different values of $\lambda$, based on the simulation algorithm proposed in [4] (sample size $n = 10^6$, $10^2$ simulation steps).
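For illustration, here is a minimal Monte Carlo sketch of the Biggins martingale $W_n(\lambda) = m(\lambda)^{-n} \sum_{|v|=n} e^{-\lambda S(v)}$ with complex $\lambda$. It is not the algorithm of [4]; it assumes binary branching with i.i.d. standard normal displacements, for which $m(\lambda) = 2e^{\lambda^2/2}$, and checks that the empirical mean of $W_n(\lambda)$ is close to one.

```python
import numpy as np

def biggins_Wn(lam, n, samples, rng):
    """Monte Carlo copies of W_n(lam) = m(lam)^{-n} * sum_{|v|=n} exp(-lam*S(v))
    for a binary branching random walk with i.i.d. N(0,1) displacements,
    where m(lam) = 2*exp(lam**2/2) by the Gaussian moment generating function
    (valid for complex lam by analytic continuation)."""
    m_lam = 2.0 * np.exp(lam * lam / 2.0)
    pos = np.zeros((samples, 1), dtype=float)   # generation-0 particle at 0
    for _ in range(n):
        # every particle branches into two; each child gets a fresh displacement
        steps = rng.standard_normal((samples, pos.shape[1], 2))
        pos = (pos[:, :, None] + steps).reshape(samples, -1)
    return np.exp(-lam * pos).sum(axis=1) / m_lam ** n

rng = np.random.default_rng(0)
W = biggins_Wn(0.3 + 0.2j, 4, 20000, rng)   # E[W_n(lam)] = 1 for every n
```

For this $\lambda$ one has $E\sum_j |T_j|^2 < 1$, so the martingale is $L^2$-bounded and the empirical mean is a stable estimate of $E[W_n(\lambda)] = 1$; the imaginary part of $W_n(\lambda)$ is genuinely nondegenerate.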

Cyclic Pólya urns
A cyclic Pólya urn consists of balls of $b$ different types. Each time a ball of type $m$ is drawn, it is placed back into the urn together with a new ball of type $m + 1 \bmod b$. If $b \ge 7$, the asymptotic fluctuations of the proportion of balls of a given type are described in terms of a complex random variable $X$ with finite variance that satisfies
$$X \stackrel{d}{=} U^{\zeta} X_1 + (1-U)^{\zeta} X_2,$$
where $\zeta = \omega_b := e^{2\pi i/b}$ and $X_1, X_2$ are i.i.d. copies of $X$ which are independent of $U$, a uniform $[0,1]$ random variable; see e.g. [5].
We show how our result applies. Assumptions (A1)–(A4) and (Z1) are readily checked, with $\alpha = 1/\Re(\zeta) \in (1, 2)$ as soon as $b \ge 7$. Since the solution of interest has a second moment, it has to be of the form $X = xZ$ for some $x \in \mathbb{C}$. The set $\mathcal{Z} := \mathrm{supp}(Z)$ has to satisfy a corresponding invariance property, which yields that $\mathcal{Z} \not\subset \mathbb{R}$. Hence Theorem 2.1 applies and shows that $X$ has a density. Figure 3 shows estimates of the density for different values of $b$, again based on the simulation algorithm proposed in [4] (sample size $n = 10^5$, $10^2$ simulation steps).
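For intuition, the urn itself is easy to simulate directly (this is not the density-estimation algorithm of [4]; the initial configuration of a single ball of type 0 is an illustrative assumption). The proportions of the $b$ types converge to $1/b$, and it is the complex fluctuation around $1/b$ that is described by $X$.

```python
import random

def cyclic_polya_urn(b, draws, rng):
    """Simulate a cyclic Polya urn: each drawn ball of type m is returned
    together with one new ball of type (m + 1) mod b. Returns the final
    counts per type, starting from a single ball of type 0."""
    balls = [0]                                # multiset of ball types
    counts = [0] * b
    counts[0] = 1
    for _ in range(draws):
        m = balls[rng.randrange(len(balls))]   # draw a ball uniformly
        new_type = (m + 1) % b
        balls.append(new_type)
        counts[new_type] += 1
    return counts

rng = random.Random(42)
b, draws = 7, 200000
counts = cyclic_polya_urn(b, draws, rng)
props = [c / sum(counts) for c in counts]      # each proportion is near 1/b
```

The deviation of each proportion from $1/b$ decays only at the slow polynomial rate $n^{\Re(\zeta)-1}$, which is why the fluctuations are non-Gaussian for $b \ge 7$.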

Proofs
We start with the short proof of Proposition 2.2.
Let $X$ be a random vector in $\mathbb{R}^d$ with characteristic function $\varphi$. Then (the law of) $X$ is called full if, for all $v \neq 0$ in $\mathbb{R}^d$, $\langle v, X\rangle$ is not a point mass. A complex-valued random variable $X$ is full if it is full upon identifying $\mathbb{C} \cong \mathbb{R}^2$. If $X$ is full, then there is $\varepsilon > 0$ such that $|\varphi(\xi)| < 1$ for all $0 < |\xi| < \varepsilon$; see [9, Lemma 1.3.15].
Proof of Proposition 2.2. If there is no nontrivial $\mathbb{U}$-invariant linear subspace, then the invariance property (5) yields that the support of $Y_t$ is not contained in a proper linear subspace of $\mathbb{C}$; hence $Y_t$ is full. By (A1) and (A2), the function $m$ is not constant, hence there is $u \in \mathbb{U}$ with $|u| \neq 1$. Then, using that $Y_t$ is infinitely divisible, Eq. (5) yields that $Y_t$ is operator semistable; full operator semistable laws are absolutely continuous, which completes the proof.

Proof. Up to obvious modifications, this can be proved along the same lines as Theorem 2 in [2].

In the following, we restrict our attention to the case where $Z$ is properly $\mathbb{C}$-valued, i.e., $\mathrm{supp}(Z) \not\subset \mathbb{R}$. The simpler case $\mathrm{supp}(Z) \subset \mathbb{R}$ requires only minor modifications.

Derivatives of the characteristic function
To proceed further, we will consider the complex derivatives $\partial_\xi \varphi(\xi)$ and $\partial_{\bar\xi} \varphi(\xi)$. Note that $\varphi$ is differentiable as soon as $E|Z| < \infty$. One has to be careful here because, in the definition of $\varphi$, the identification $\mathbb{C} \cong \mathbb{R}^2$ and the real inner product are used. We write $\xi = \xi_1 + i\xi_2$ and use the Wirtinger derivatives $\partial_\xi := \frac12(\partial_{\xi_1} - i\,\partial_{\xi_2})$ and $\partial_{\bar\xi} := \frac12(\partial_{\xi_1} + i\,\partial_{\xi_2})$. The characteristic function $\varphi$ is given by
$$\varphi(\xi) = E\big[e^{-i\langle \xi, Z\rangle}\big] = E\big[e^{-\frac{i}{2}(\xi \bar{Z} + \bar{\xi} Z)}\big],$$
because $\langle \xi, z\rangle = \frac12(\xi\bar{z} + \bar{\xi} z)$ and $\partial_\xi(\xi z) = z$, $\partial_{\bar\xi}(\xi z) = 0$ for $z \in \mathbb{C}$. Therefore, by the chain rule for complex differentiation (using Wirtinger derivatives),
$$\partial_\xi \varphi(\xi) = -\tfrac{i}{2}\,E\big[\bar{Z}\, e^{-i\langle \xi, Z\rangle}\big], \qquad \partial_{\bar\xi} \varphi(\xi) = -\tfrac{i}{2}\,E\big[Z\, e^{-i\langle \xi, Z\rangle}\big].$$
As the first step, we are going to prove decay rates for both derivatives.
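For the differentiation of the functional equation below, one also needs how the Wirtinger derivatives act on the maps $\xi \mapsto \bar{T}\xi$. The following sketch assumes the convention in which the conjugated weight $\bar{T}_j$ appears in the argument of $\varphi$; other conventions move the conjugation around.

```latex
% Chain rule for Wirtinger derivatives applied to xi -> conj(T) xi:
% the map is C-linear in xi, so its d/d(conj xi)-derivative vanishes,
% while its conjugate, conj(conj(T) xi) = T conj(xi), contributes T.
\partial_{\bar\xi}\, f(\bar{T}\xi)
  = (\partial f)(\bar{T}\xi)\,\underbrace{\partial_{\bar\xi}(\bar{T}\xi)}_{=\,0}
  + (\bar\partial f)(\bar{T}\xi)\,\underbrace{\partial_{\bar\xi}\big(T\bar{\xi}\big)}_{=\,T}
  = T\,(\bar\partial f)(\bar{T}\xi),
\qquad
\partial_{\xi}\, f(\bar{T}\xi) = \bar{T}\,(\partial f)(\bar{T}\xi).
```

This is the source of the interchange of $T_j$ and $\bar{T}_j$ between the estimates for $\partial_{\bar\xi}\varphi$ and $\partial_\xi\varphi$.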
Proof. We will prove the estimate for $\partial_{\bar\xi}\varphi$. The proof for $\partial_\xi \varphi$ is completely analogous, up to replacing $T_j$ by $\bar{T}_j$. Define $g(\xi) := \partial_{\bar\xi}\varphi(\xi)$; then, differentiating both sides of the functional equation for $\varphi$ yields Eq. (10).
Proof. As before, we focus on $g(\xi) = \partial_{\bar\xi}\varphi(\xi)$. By taking squares in Eq. (10) and applying Jensen's inequality to the discrete probability measure $\frac{1}{N}\sum_{j=1}^{N} \delta_{|T_j|\,|g(T_j \xi)|}$, we obtain an estimate that is valid for all $\xi$ with $|\xi| \ge t_{\delta}^{-1}$. Using the decay properties of $g$ provided by Lemma 3.3, the right-hand side in (14) is bounded by a quantity that is finite due to (C1). Now choose $\delta$ small enough that the resulting constant $\gamma$ satisfies $\gamma < 1$. This is possible since $N_\delta \to N$ a.s. for $\delta \to 0$, $P(N > 1) > 0$ and $E[N^2] < \infty$ by assumption. The remainder of the proof relies on the following claim.
Claim: For all $m \in \mathbb{N}$,
$$I(K) \le \sum_{n=0}^{m} \gamma^n I(U) + m\gamma^{m-1}\beta C + \gamma^m C \log^+ K,$$
where $C < \infty$ is the constant factor in the growth rate of $g$.
If the claim holds, then letting $m \to \infty$ yields $I(K) \le I(U)/(1-\gamma) < \infty$ for all $K$, which proves that $g : \mathbb{C} \to \mathbb{C}$ is in $L^2$.
Proof of the Claim: We proceed by induction over $m \in \mathbb{N}$. For $m = 0$, the estimate on the growth rate of $g$ provided by Lemma 3.3 gives (after possibly enlarging $U$) the bound $I(K) \le I(U) + C\log^+ K$. Note that we are integrating over $\mathbb{C}$, which is a two-dimensional $\mathbb{R}$-vector space.
Suppose the claim holds for $m \in \mathbb{N}$. This means $I(K) \le a + b\log^+ K$ with $a = \sum_{n=0}^{m} \gamma^n I(U) + m\gamma^{m-1}\beta C$ and $b = \gamma^m C$. Using Eq. (15) to iterate, we obtain
$$I(K) \le \sum_{n=0}^{m+1} \gamma^n I(U) + m\gamma^{m}\beta C + \gamma^{m+1} C \log^+ K + \beta\gamma^{m} C = \sum_{n=0}^{m+1} \gamma^n I(U) + (m+1)\gamma^{m}\beta C + \gamma^{m+1} C \log^+ K,$$
which proves the claim.
Now we are in a position to prove Theorem 2.1.
This is indeed sufficient to proceed as in [8, Lemma 3.2] and conclude that $|h(\xi)| = O(|\xi|^{-2})$. This estimate can then be used in a similar way to produce bounds for $\partial_\xi \varphi(\xi)$, and so on.