The phase diagram of the complex branching Brownian motion energy model

We complete the analysis of the phase diagram of the complex branching Brownian motion energy model by studying Phases I, III and boundaries between all three phases (I-III) of this model. For the properly rescaled partition function, in Phase III and on the boundaries I/III and II/III, we prove a central limit theorem with a random variance. In Phase I and on the boundary I/II, we prove an a.s. and $L^1$ martingale convergence. All results are shown for any given correlation between the real and imaginary parts of the random energy.


INTRODUCTION
Random energy models (REM) suggested by Derrida [13,14] turned out to be a useful and instructive "playground" in the study of strongly correlated random systems on large/high-dimensional state spaces; see, e.g., the recent reviews [29,23,11]. In this context, branching Brownian motion (BBM) viewed as a random energy model plays a special rôle. It turns out that BBM has correlations which are exactly at the borderline between the regime of weak correlations (the REM universality class) and that of strong correlations. Apart from that, BBM is a particularly transparent representative of a whole class of models with similar (so-called logarithmic) correlation strength: the Gaussian free field [31,9,10]; Gaussian multiplicative chaos/cascades [30,7]; characteristic polynomials of random matrices and number-theoretic models [17,3,4]; cover times [8]; etc.
In this paper, we focus on the complex-valued BBM energy model and show that this model lies exactly at the borderline of the complex REM universality class. This means that the phase diagram of the model is the same as in the complex REM, cf. Derrida [15] and [21]. However, the fluctuations of the partition function of this model are already influenced by the strong correlations and differ from those of the REM in all phases of the model, as we show in this work (and in [18]).
The motivation to consider the complex-valued setup is two-fold: (1) Critical phenomena. Lee and Yang [27] observed that phase transitions (= analyticity breaking of the log-partition function) occur at critical points due to the accumulation of complex zeros of the partition function (viewed as a function of the temperature) around the critical points on the real line, as the size of the system tends to infinity (= thermodynamic limit). (2) Quantum physics and interference phenomena. The formalism of quantum physics is based on the sums (and integrals) of complex exponentials which naturally leads to cancellations between the magnitudes of the summands in the partition function. This is a manifestation of the interference phenomenon, see, e.g., [16].
1.1. Branching Brownian motion. Before stating our results, let us briefly recall the construction of a BBM. Consider a canonical continuous branching process: a continuous-time Galton-Watson (GW) process [6]. It starts with a single particle located at the origin at time zero. After an exponential time of parameter one, this particle splits into k ∈ Z_+ particles according to some probability distribution (p_k)_{k≥0} on Z_+. Then, each of the newborn particles splits independently at independent exponential (parameter 1) times, again according to the same (p_k)_{k≥0}, and so on. We assume that $\sum_{k=1}^{\infty} p_k = 1$. In addition, we assume that $\sum_{k=1}^{\infty} k p_k = 2$ (i.e., the expected number of children per particle equals two). Finally, we assume that $K := \sum_{k=1}^{\infty} k(k-1) p_k < \infty$ (finite second moment).
For given t ≥ 0, we label the particles of the process as i_1(t), . . . , i_{n(t)}(t), where n(t) is the total number of particles at time t. Note that, under the above assumptions, we have E[n(t)] = e^t. For s ≤ t, we denote by i_k(s, t) the unique ancestor of particle i_k(t) at time s. In general, there will be several indices k, l such that i_k(s, t) = i_l(s, t). For s, r ≤ t, define the time of the most recent common ancestor of particles i_k(r, t) and i_l(s, t) as

(1.1) $d(i_k(r,t), i_l(s,t)) := \sup\{0 \le q \le r \wedge s : i_k(q,t) = i_l(q,t)\}.$

For t ≥ 0, the collection of all ancestors naturally induces a random tree, called the GW tree up to time t. We denote by F_{T_t} the σ-algebra generated by the GW process up to time t. In addition to the genealogical structure, the particles get positions in R. Specifically, the first particle starts at the origin at time zero and performs Brownian motion until the first time the GW process branches. After branching, each newborn particle independently performs Brownian motion (started at the branching location) until its respective next branching time, and so on. We denote the positions of the n(t) particles at time t ≥ 0 by x_1(t), . . . , x_{n(t)}(t).
We define BBM as a family of Gaussian processes, indexed by the time horizon t ≥ 0. Note that, conditionally on the underlying GW tree, these Gaussian processes have the covariance

(1.2) $\mathrm{Cov}\big(x_k(s), x_l(r) \mid F_{T_t}\big) = d\big(i_k(s,t), i_l(r,t)\big), \qquad s, r \le t.$

In what follows, to lighten the notation, we simply write x_i(s) := x_i(s, t), i ≤ n(t), s ≤ t, hoping that this will not cause confusion about the parameter t ≥ 0. (Recall that the assumption $\sum_{k=1}^{\infty} p_k = 1$ implies that p_0 = 0, so none of the particles ever dies.)
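The construction above can be made concrete with a small simulation. The following sketch is ours, not the paper's: it assumes binary branching (p_2 = 1), which is also the assumption used in several theorems below, and the function name and the choice t = 3 are illustrative.

```python
import math
import random

def simulate_bbm(t_max, seed=0):
    """Simulate binary branching Brownian motion up to time t_max.

    Each particle branches into two after an exponential(1) waiting time
    and diffuses as a standard Brownian motion (variance s after time s).
    Returns the list of particle positions x_1(t_max), ..., x_{n(t_max)}.
    """
    rng = random.Random(seed)
    alive = [(0.0, 0.0)]            # entries: (current time, current position)
    positions = []
    while alive:
        s, x = alive.pop()
        tau = rng.expovariate(1.0)  # exponential(1) branching time
        if s + tau >= t_max:
            # No further branching: diffuse until the horizon t_max.
            positions.append(x + rng.gauss(0.0, math.sqrt(t_max - s)))
        else:
            # Branch: diffuse to the branching time, then spawn two
            # children that continue independently from that location.
            y = x + rng.gauss(0.0, math.sqrt(tau))
            alive.append((s + tau, y))
            alive.append((s + tau, y))
    return positions

pos = simulate_bbm(3.0, seed=42)
print(len(pos))  # n(3); on average E[n(t)] = e^t, i.e., about 20 here
```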

1.2. A model of complex-valued random energies. In this section, we introduce the complex BBM random energy model.
Let ρ ∈ [−1, 1]. For any t ∈ R_+, let X(t) := (x_k(t))_{k≤n(t)} and Y(t) := (y_k(t))_{k≤n(t)} be two BBMs with the same underlying GW tree such that, for k ≤ n(t),

(1.5) $\mathrm{Cov}(x_k(t), y_k(t)) = |\rho|\, t.$

Note that

(1.6) $Y(t) \stackrel{D}{=} |\rho|\, X(t) + \sqrt{1-\rho^2}\, Z(t),$

where "$\stackrel{D}{=}$" denotes equality in distribution and Z(t) := (z_i(t))_{i≤n(t)} is a branching Brownian motion independent of X(t) and with the same underlying GW process. Representation (1.6) allows us to handle arbitrary correlations by decomposing the process Y into a part independent of X and a fully correlated one.
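Representation (1.6) can be checked numerically. The snippet below is a simplification of ours: it uses plain Gaussians as stand-ins for the single-particle marginals x_k(t) and z_k(t) (no branching is involved) and verifies that the decomposition reproduces the covariance (1.5).

```python
import numpy as np

rng = np.random.default_rng(0)
rho, t, n = -0.6, 2.0, 200_000

# Single-particle marginals: x(t), z(t) ~ N(0, t), independent.
x = rng.normal(0.0, np.sqrt(t), size=n)
z = rng.normal(0.0, np.sqrt(t), size=n)

# Representation (1.6): the second coordinate decomposes into a fully
# correlated part |rho| * x and an independent part sqrt(1 - rho^2) * z.
y = abs(rho) * x + np.sqrt(1.0 - rho**2) * z

emp_cov = float(np.mean(x * y))   # empirical Cov(x(t), y(t))
print(emp_cov)                    # should be close to |rho| * t = 1.2
```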
We define the partition function of the complex BBM energy model with correlation ρ at inverse temperature β := σ + iτ ∈ C by

(1.7) $\mathcal{X}_t(\beta) := \sum_{k \le n(t)} e^{\sigma x_k(t) + i\tau y_k(t)}.$

1.3. Notation. By L[·], L[· | ·], and =⇒ or wlim, we denote the law, the conditional law, and weak convergence, respectively.

1.4. Main results. Let us specify the three domains depicted in Figure 1 analytically:

(1.8) $B_1 := \{\beta = \sigma + i\tau \in \mathbb{C} : \sigma^2 + \tau^2 < 1\} \cup \{\beta : 2\sigma^2 > 1,\ |\sigma| + |\tau| < \sqrt{2}\},$
$B_2 := \{\beta : 2\sigma^2 > 1,\ |\sigma| + |\tau| > \sqrt{2}\},$
$B_3 := \{\beta : 2\sigma^2 < 1,\ \sigma^2 + \tau^2 > 1\}.$

Remark. Some of our results will be stated under the binary branching assumption (i.e., p_k = 0 for all k > 2). Existence of all moments for the number of children of a given particle would also suffice for all our proofs and would not require essential changes.
Our first result states that the complex BBM energy model indeed has the phase diagram depicted in Figure 1.
Theorem 1.1 (Phase diagram). For any ρ ∈ [−1, 1] and any β ∈ C, the complex BBM energy model with binary branching has the same log-partition function and the same phase diagram (cf. Figure 1) as the complex REM, i.e.,

(1.9) $\lim_{t \to \infty} \frac{1}{t} \log \big|\mathcal{X}_t(\beta)\big| = \begin{cases} 1 + \frac{\sigma^2 - \tau^2}{2}, & \beta \in \overline{B_1}, \\ \sqrt{2}\,|\sigma|, & \beta \in \overline{B_2}, \\ \frac{1}{2} + \sigma^2, & \beta \in \overline{B_3}, \end{cases}$

in probability.
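For orientation, the limiting log-partition function can be evaluated piecewise. The explicit expressions and phase boundaries used below are the complex REM free-energy values from [15,21] (an assumption on our part, reconstructed from that literature); the continuity check at the triple point β = (1 + i)/√2 is a sanity test of the boundaries.

```python
import math

def log_partition_rate(sigma, tau):
    """Limiting value of (1/t) log |X_t(beta)|, beta = sigma + i*tau,
    using the complex REM expressions (assumed, see lead-in):
      phase B1: 1 + (sigma^2 - tau^2) / 2
      phase B2: sqrt(2) * |sigma|
      phase B3: 1/2 + sigma^2
    Points on the boundaries are assigned to an adjacent phase;
    the value is continuous there, so the choice does not matter.
    """
    s, u = abs(sigma), abs(tau)
    if s * s + u * u < 1 or (2 * s * s > 1 and s + u < math.sqrt(2)):
        return 1 + (sigma**2 - tau**2) / 2       # B1
    if 2 * s * s > 1:
        return math.sqrt(2) * s                   # B2
    return 0.5 + sigma**2                         # B3

# At the triple point all three expressions coincide (value 1).
s0 = 1 / math.sqrt(2)
vals = (1 + (s0**2 - s0**2) / 2, math.sqrt(2) * s0, 0.5 + s0**2)
print(vals)  # all three are approximately 1.0
```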

See Section 5 for a proof.
Remark. It is conjectured that the convergence in (1.9) also holds in $L^1$; see [21, Theorem 2.15] for a related result for the REM.

1.5. A class of martingales. At the centre of our analysis is the following family of martingales:

$M_{\sigma,\tau}(t) := e^{-t\left(1 + \frac{\sigma^2 - \tau^2}{2} + i\sigma\tau|\rho|\right)} \sum_{k \le n(t)} e^{\sigma x_k(t) + i\tau y_k(t)}, \qquad t \ge 0.$

For τ = 0, M_{σ,0}(t) coincides with the McKean martingale introduced in [12], where it was proven that these martingales converge almost surely and in $L^1$ to a non-degenerate limit.
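The normalization of M_{σ,τ}(t) comes from the mean of a single summand of the partition function. A short derivation, assuming only the covariance (1.5) and E[n(t)] = e^t:

```latex
% For a single particle, (x(t), y(t)) is centered Gaussian with
% Var x(t) = Var y(t) = t and Cov(x(t), y(t)) = |\rho| t.
% For the linear functional W := \sigma x(t) + i\tau y(t), one has
% \mathbb{E}[e^{W}] = e^{\mathbb{E}[W^2]/2}, hence
\mathbb{E}\big[e^{\sigma x(t) + i\tau y(t)}\big]
  = \exp\Big(\tfrac{\sigma^2 - \tau^2}{2}\, t + i\,\sigma\tau|\rho|\, t\Big),
% and, since \mathbb{E}[n(t)] = e^{t},
\mathbb{E}\Big[\textstyle\sum_{k \le n(t)} e^{\sigma x_k(t) + i\tau y_k(t)}\Big]
  = \exp\Big(t\big(1 + \tfrac{\sigma^2 - \tau^2}{2} + i\,\sigma\tau|\rho|\big)\Big).
```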
The next theorem states that, for β ∈ B_1, the martingales M_{σ,τ}(t) are bounded in $L^p$ for some p > 1.

Theorem 1.2. Let ρ ∈ [−1, 1] and β ∈ B_1. Then M_{σ,τ}(t) is bounded in $L^p$ for some p > 1; in particular, the limit $M_{\sigma,\tau} := \lim_{t \uparrow \infty} M_{\sigma,\tau}(t)$ exists a.s. and in $L^1$, and is non-degenerate.
See Section 2 for a proof.
Remark. By the branching property, for r ≤ t,

$M_{\sigma,\tau}(t) = \sum_{k \le n(r)} a_k(r)\, M^{(k)}_{\sigma,\tau}(t-r),$

where $M^{(k)}_{\sigma,\tau}(t-r)$ are i.i.d. copies of $M_{\sigma,\tau}(t-r)$ and $a_k(r) \in \mathbb{C}$ are some complex weights independent of $M^{(k)}_{\sigma,\tau}(t-r)$, k ∈ N. If a limit $M_{\sigma,\tau}$ of $M_{\sigma,\tau}(t)$ as t ↑ ∞ exists, then it has to satisfy the equation

$M_{\sigma,\tau} \stackrel{D}{=} \sum_{k \le n(r)} a_k(r)\, M^{(k)}_{\sigma,\tau},$

where $M^{(k)}_{\sigma,\tau}$ are i.i.d. copies of $M_{\sigma,\tau}$. This type of equation is called a complex smoothing transform. A detailed study of the solutions of such equations with complex weights was recently carried out by Meiners and Mentemeier [28]; see also the recent paper by Kolesko and Meiners [24]. The case of real-valued scalar weights was treated by Alsmeyer and Meiners [2] and by Iksanov and Meiners [20].
The following three results cover the strip |σ| < 1/√2 and are essentially "central limit theorems" (CLTs) with a random variance.
See Section 2 for a proof.

Remark. A result resembling Theorem 1.4 (i) was obtained by Iksanov and Kabluchko in [19] for β ∈ R.
Remark. The appearance of the random variance in Theorem 1.4 (and in the following ones) is in sharp contrast with the REM [21] and generalized REM [22], where CLTs with deterministic variance hold for β in the strip |σ| < 1/ √ 2.
Theorem 1.5. Let β ∈ B_3, ρ ∈ [−1, 1], and assume binary branching. Then the rescaled partition function N_{σ,τ}(t), defined in (1.19), satisfies a CLT with random variance $C_2 M_{2\sigma,0}$, where C_2 > 0 is some constant.

See Section 3.3 for a proof.
A similar result also holds on the boundary between phases B_1 and B_3, i.e., on the set $B_{1,3} := \{\beta = \sigma + i\tau : \sigma^2 + \tau^2 = 1,\ |\sigma| < 1/\sqrt{2}\}$. Theorem 1.6. Let β ∈ B_{1,3}, ρ ∈ [−1, 1], and assume binary branching.

See Section 4.1 for a proof.
Recall that the behaviour of the partition function at β = √2 is determined by the martingale M_{1,0}(t), which is related to another martingale, the so-called derivative martingale

(1.20) $Z(t) := \sum_{k \le n(t)} \big(\sqrt{2}\, t - x_k(t)\big)\, e^{-\sqrt{2}\left(\sqrt{2}\, t - x_k(t)\right)}.$

Lalley and Sellke proved in [26] that Z(t) converges a.s., as t → ∞, to a non-trivial limit Z, which is a positive and a.s. finite random variable. At the boundary, including the triple point, we have, after appropriate rescaling, the following CLT with random variance.
and assume binary branching. See Section 4.3 for a proof.
1.6. Organization of the rest of the paper. The remainder of the paper is organized as follows. In Section 2, we prove Theorems 1.2 and 1.4, concerning the behaviour of the partition function in Phase B_1. In Section 3, we treat Phase B_3: we start with a second moment computation, which is then generalised, in the next subsection, to a constrained higher moment computation; finally, in Section 3.3, we prove Theorem 1.5. Theorems 1.6 and 1.7, concerning the boundaries B_{1,3} and B_{2,3}, are proved in Section 4. Section 5 contains the proof of Theorem 1.1.

PROOF OF RESULTS FOR PHASE B 1
We start by proving the martingale convergence of M σ,τ (t).
Proof of Theorem 1.2. One readily checks that M_{σ,τ}(t) is a martingale with expectation 1. Next, we compute its p-th moment, where we use Representation (1.6). The right-hand side of (2.1) is then bounded, by Jensen's inequality for conditional expectations. By the branching property, we obtain (2.5), where k and j are two BBM particles at time t − q_{k,j} from two independent copies X^{(1)}(·) and X^{(2)}(·) of a BBM; let n^{(1)}(s) and n^{(2)}(s) denote the numbers of particles of X^{(1)}, resp. X^{(2)}, at time s. Using (2.5), we rewrite (2.3) as (2.6). Using again Jensen's inequality for the conditional expectation, we obtain (2.7). Next, we bound (2.7) from above, using Jensen's inequality for the conditional expectation given F_{q_{k,l}}, the σ-algebra generated by the BBM X up to time q_{k,l}; in particular, we condition on q_{k,l}. Calculating the inner expectations in (2.8) gives (2.9) by completing the square. Hence, (2.10) follows by computing the Gaussian integral. Using (2.10), and noticing the normalization, we see that, for |τ| + |σ| < √2, the right-hand side of (2.11) is uniformly bounded by some constant C. Since M_{σ,τ}(t) is thus bounded in $L^p$ for some p > 1, the a.s. limit exists and the convergence also holds in $L^1$. Moreover, E[M_{σ,τ}(t)] = 1, and hence the limit is non-degenerate.
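The "completing the square" and Gaussian integral steps around (2.9)-(2.10) rest on the standard exponential moment of a centered Gaussian; as a reminder:

```latex
% For X ~ N(0, q) and any \sigma \in \mathbb{R}:
\mathbb{E}\big[e^{\sigma X}\big]
  = \int_{\mathbb{R}} e^{\sigma x}\, \frac{e^{-x^2/(2q)}}{\sqrt{2\pi q}}\, \mathrm{d}x
  = e^{\sigma^2 q/2} \int_{\mathbb{R}} \frac{e^{-(x - \sigma q)^2/(2q)}}{\sqrt{2\pi q}}\, \mathrm{d}x
  = e^{\sigma^2 q/2}.
```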
Next, we turn to proving the central limit theorem when σ < 1/ √ 2.
Proof of Theorem 1.4. We start with the proof of (1.16). Let

(2.12) $a_k(r) := e^{-r\left(1+\frac{\sigma^2-\tau^2}{2}+i\sigma\tau|\rho|\right)}\, e^{\sigma x_k(r) + i\tau y_k(r)}.$

Then, we can rewrite M_{σ,τ}(t) as

(2.13) $M_{\sigma,\tau}(t) = \sum_{k \le n(r)} a_k(r)\, M^{(k)}_{\sigma,\tau}(t-r),$

where, conditionally on F_r, the $M^{(k)}_{\sigma,\tau}(t-r)$ are i.i.d. copies of $M_{\sigma,\tau}(t-r)$. Hence, conditionally on F_r, M_{σ,τ}(t) can be written as a sum of independent random variables. To prove a CLT, we want to use the two-dimensional Lindeberg-Feller condition (conditionally on F_r). First, we take the limit t ↑ ∞. For σ < 1/√2 and β ∈ B_1, we have σ² + τ² < 1. Then, (2.14) follows from [18]. Hence, the a.s. limit M_{σ,τ} exists in $L^2$, and, as t ↑ ∞, the right-hand side of (2.13) converges a.s. by (2.14). Now, we combine (2.18) with the extra rescaling in (1.16), which converges a.s., as r ↑ ∞, to $C_1 M_{2\sigma,0}$. It remains to check the Lindeberg-Feller condition. We define b_k(r) in (2.20). Let ε > 0 and consider (2.21), which we rewrite as (2.22); then (2.23) converges to zero as r ↑ ∞, provided (2.24) holds. Observe that M_{2σ,0}(r) is an $L^2$-bounded martingale with mean one if σ < 1/√2. Hence, it converges a.s. and in $L^1$.

PROOF OF RESULTS FOR PHASE B_3
In this section, we deal with phase B_3 and prove Theorem 1.5.
3.1. Second moment computations. We start by controlling the second moment of N σ,τ (t) defined in (1.19) in phase B 3 and its appropriately scaled version on the boundary B 1,3 .

(3.4)
Using Representation (1.6), we rewrite the right-hand side of (3.4) as (3.5), where λ = σ + iρτ and (z_k(t))_{k≤n(t)} are the particles of a BBM on T_t that is independent of X(t). By conditioning on F_{T_t}, we have that (3.5) is equal to (3.6). The expectation in (3.6) is equal to

(3.7) $e^{2\sigma x + \sigma(y+y') + i\tau\rho(y-y')}\, e^{-\frac{y^2+y'^2}{2(t-q)}}\, e^{-x^2/(2q)}.$

(The constant C_2 introduced below depends on σ and τ, but not on ρ; we do not make this dependence explicit in our notation.)
Computing first the integrals with respect to y and y', we get that (3.7) is equal to (3.8). Plugging (3.8) back into (3.6), we get that (3.6) is equal to (3.9). As t ↑ ∞, the term in (3.9) converges to $\frac{K}{\tau^2+\sigma^2-1}$, which we call C_2 from now on. (ii) Proceeding as in Part (i), we get (3.10). Plugging (3.8) into (3.10), we get that (3.10) is equal to (3.11), since σ² + τ² = 1 on B_{1,3}.

3.2. Constrained moment computation in B_3. In this section, we continue our preparations for the proof of Theorem 1.5; these consist of computing constrained moments.
The following two lemmata ensure that we can introduce the desired constraint. Proof. Using a second-moment Chebyshev inequality, we bound the probability in (3.12) from above by (3.13). Continuing as in the proof of Lemma 3.1, we rewrite (3.13) as (3.14). We rewrite the expectation in (3.14) as (3.15), in analogy with (3.7). Observe that, by the computations in Lemma 3.1, (3.16) holds for r sufficiently large. Hence, it suffices to consider the integration domain q > t − r. Now, P(y > r) < e^{−r/2}. To have x + y > 2σt + A√t on that event, x > 2σt + A√t − r must hold. Inserting this into the second moment, we see that the bound is smaller than ε/2 for A sufficiently large.
Then, for all ε > 0 and d > 0, there exists r_0 > 0 such that, for all r > r_0, uniformly for all t sufficiently large, (3.17) holds. Proof. We again use a second moment bound. Similarly to the proof of Lemma 3.2, we bound the probability in (3.17) from above by (3.18). By only keeping track of the path event for one of the particles, we get that (3.18) is bounded from above by (3.19). We rewrite (3.19) as (3.20), where x_1(·) is a standard Brownian motion and x_2(t − q) is an independent N(0, t − q)-distributed random variable. Calculating the expectation in (3.20) with respect to x_2(t − q) yields (3.21). As in the proof of Lemma 3.2, we can first choose r_1 large enough such that the above integral from 0 to t − r_1 is bounded by ε/3. Moreover, $\bar{x}(t-q)$ is normally distributed with mean zero and variance t − q, and is independent of x_1(s) for s ≤ q. Then, for all R > R_2, (3.22) holds. Observe that the intersection of the event $\{\bar{x}(t-q) > R\}$ and the event in the indicator in (3.21) is contained in the event (3.23). Using that $x_1(s) - \frac{s}{q} x_1(q) = \xi_k(s)$ is a Brownian bridge that is independent of x_1(q) and also of $\bar{x}(t-q)$, we bound (3.21) from above by (3.24). By the same computations as in (3.7) and (3.8), we can bound (3.24) from above by (3.25). It is a well-known fact for Brownian bridges (see, e.g., [12, Lemma 2.3] for a precise statement) that, by choosing r sufficiently large, (3.25) can be made as small as we want. This finishes the proof of Lemma 3.3. Define (3.26). The following lemma provides the asymptotics for all moments of (3.26) in the t → ∞ limit.
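The independence of the Brownian bridge $\xi_k(s) = x_1(s) - \frac{s}{q} x_1(q)$ from the endpoint value $x_1(q)$, used above, follows from a one-line covariance computation (jointly Gaussian variables with zero covariance are independent):

```latex
% For a standard Brownian motion x_1 and 0 \le s \le q:
\operatorname{Cov}\Big(x_1(s) - \tfrac{s}{q}\, x_1(q),\; x_1(q)\Big)
  = \operatorname{Cov}\big(x_1(s), x_1(q)\big) - \tfrac{s}{q}\operatorname{Var}\big(x_1(q)\big)
  = s - \tfrac{s}{q}\, q
  = 0.
```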
Lemma 3.4 (Moment asymptotics). Consider a branching Brownian motion with binary splitting. For β ∈ B_3 and any A > 0, (3.27) holds with $\lim_{A\to\infty} C_{2,A} = C_2$, and, for k ∈ N, we have

(3.28) $\lim_{r\uparrow\infty}\lim_{t\to\infty} \mathbb{E}\Big[\big|N^{c,A}_{\sigma,\tau}(t)\big|^{2k} \,\Big|\, \mathcal{F}_r\Big] = k!\,(C_{2,A} M_{2\sigma,0})^k \quad \text{a.s. and in } L^1.$

Moreover, for k' < k, (3.29) holds.

Proof. We proceed by induction over k ∈ N. For k = 1, the claim follows directly from Lemma 3.1, together with the second moment computation done in the proof of Lemma 3.2, and Lemma 3.3.
To bound the 2k-th moment, we rewrite (3.28) as (3.30). For l_1, . . . , l_{2k} ≤ n(t), we can find a matching using the following algorithm: 1. Choose the two labels j, j' such that $d(x_{l_j}, x_{l_{j'}})$ is maximal; call them l_1 and l_{σ(1)} from now on. 2. Delete them. 3. Pick l_j in the remaining set and match it with the remaining l_{j'} such that $d(x_{l_j}, x_{l_{j'}})$ is maximal. Iterate.
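The greedy pairing described in steps 1-3 can be sketched as follows; the input d is a symmetric matrix of pairwise branching times, and the 4-particle matrix below is hypothetical example data of ours, not from the paper.

```python
def greedy_matching(d):
    """Greedily pair indices by maximal overlap d[i][j].

    d: symmetric matrix (list of lists) of pairwise branching times
       d(x_i, x_j), for an even number of particle labels.
    Returns a list of pairs (i, j): at each step, the remaining pair
    with the largest d-value is matched and both labels are removed.
    """
    remaining = set(range(len(d)))
    pairs = []
    while remaining:
        # Pick the two remaining labels with maximal branching time.
        i, j = max(
            ((a, b) for a in remaining for b in remaining if a < b),
            key=lambda p: d[p[0]][p[1]],
        )
        pairs.append((i, j))
        remaining -= {i, j}
    return pairs

# Hypothetical example: particles 0,1 branched late (overlap 2.5),
# particles 2,3 branched late (overlap 2.0), cross overlaps are small.
d = [
    [0.0, 2.5, 0.3, 0.1],
    [2.5, 0.0, 0.2, 0.4],
    [0.3, 0.2, 0.0, 2.0],
    [0.1, 0.4, 2.0, 0.0],
]
print(greedy_matching(d))  # [(0, 1), (2, 3)]
```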
(3.40) (A corresponding lower bound also holds, due to the second moment computation in Lemma 3.4.) Proof. The l.h.s. of (3.39) is equal to (3.41). Making the change of variable y = (m_1 + 2)σq + iτ m_2 q + x, we obtain that (3.41) equals (3.42). For m_1 ≥ 1, by the Gaussian tail asymptotics, (3.42) is bounded from above by

(3.43) $e^{2m_1\sigma^2 q + 2\sigma q + m_2^2 \tau^2 q}\, e^{m_1 \sigma C q^{\gamma}}.$

The expectation on the right-hand side of (3.39) is equal to (3.44). For k, j ∈ [n(t)], define m(·) (cf. Fig. 2). Looking at the path of $x_{l_1}(t)$, we consider the quantity m(·) for e ⊂ path(l_1) before time $d(x_{l_1}, x_{l_{j^*}})$, where $l_{j^*}$ satisfies the following conditions: (i) m is constant between $l_{j^*}$ and $l_{j^*-1}$, and this piece has length > 2r.
where length is defined in Fig. 2. Such an $l_{j^*}$ exists for all t > t_0(r), because there are at most 2k − 2 points where m is allowed to change. We call m* the value of m on the piece of the path between $l_{j^*}$ and $l_{j^*-1}$. m* corresponds to a time interval [R, R + ℓ], where

(3.48) $\ell = \ell(j^*, t) := \mathrm{length}\big(x_{l_{j^*-1}}(t), x_{l_{j^*}}(t)\big).$

Then, up to time R, the minimal particle is a.s. > −√2 R. Hence, (3.49) holds. Since we compute an expectation conditional on $x_{l_{j^*}}(R+\ell) < 2\sigma(R+\ell) + (R+\ell)^{\gamma}$, we obtain (3.50) on this event. Due to our choice of j*, we have $2\sigma R + \sqrt{2} R < C' \ell^{\gamma}$ for some positive constant C'. By taking the expectation with respect to $x_{l_{j^*}}(R+\ell) - x_{l_{j^*}}(R)$ only, we can extract from J_A the factor

(3.52) $\mathbb{E}\Big[e^{2\sigma(x_{l_{j^*}}(R+\ell) - x_{l_{j^*}}(R))}\, \mathbf{1}\big\{x_{l_{j^*}}(R+\ell) - x_{l_{j^*}}(R) < 2\sigma\ell + (C'+1)\ell^{\gamma}\big\}\Big],$

for ℓ large (which, by condition (i) on ℓ, corresponds to r large). Note that the quantity inside the brackets in (3.52) corresponds to the same expectation, but where, in the underlying tree, l_1 and l_{σ(1)} branched off before time R.
Iteratively, this leads to (3.53). Since k was chosen arbitrarily, we know that the main contribution to the 2k-th moment comes from the term where l_1, . . . , l_k have split before time r, for r large enough. We condition on F_r and compute (3.54), whose summands involve $e^{\sigma(x_{l_j}(t)+x_{l_{\sigma(j)}}(t)) + i\tau(y_{l_j}(t)+y_{l_{\sigma(j)}}(t))}$, where $b_{l_j}(r)$ is defined in (2.20) and the $N^{\gamma,A}_{\sigma,\tau}(t-r)^{(j)}$ are i.i.d. copies of $N^{\gamma,A}_{\sigma,\tau}(t-r)$.
The case k' < k follows similarly. Take an optimal matching of the first k' particles; the remaining particles are not matched. Take one l_1 that has not been matched. Along its path, we can again find the first macroscopic piece on which m(·) is constant. Applying Lemma 3.5, we get that the contribution is largest if $\max_j d(l_1, l_j) < R$, for R large enough. Observe that (3.57) holds. Since, in B_3, we have 1 − σ² − τ² < 0, the summands on the r.h.s. of (3.57) converge to zero as t ↑ ∞. This, together with the argument in the even case, implies Lemma 3.4.

3.3. Proof of Theorem 1.5.
Proof of Theorem 1.5. By Lemma 3.4, conditionally on F_r, the moments of $N^{c,A}_{\sigma,\tau}(t)$ converge, a.s., to the moments of a $N(0, C_{2,A} M_{2\sigma,0})$ distribution, as t ↑ ∞ and then r ↑ ∞. Since the normal distribution is uniquely characterised by its moments, this implies convergence in distribution. Moreover, Lemma 3.2 and Lemma 3.3 allow us to remove the constraint. Now, we need the following, with $\lim_{A\uparrow\infty} C_{3,A} = C_3$: for k ∈ N, we have

$\lim_{A\uparrow\infty}\lim_{r\uparrow\infty}\lim_{t\to\infty} \mathbb{E}\Big[\big|N^{c,A}_{\sigma,\tau}(t)\big|^{2k} \,\Big|\, \mathcal{F}_r\Big] = k!\,(C_3 M_{2\sigma,0})^k \quad \text{a.s. and in } L^1.$

$M^{SH}_{1,0}(t)$ is called the critical additive martingale, and the rescaling appearing on the r.h.s. of (4.7) is referred to as the Seneta-Heyde scaling. The limiting behaviour of $M^{SH}_{1,0}$ in the setting of branching random walks was first analysed in [1]; an alternative proof is given in [25]. As t → ∞, (4.7) converges in probability to a limiting random variable $M^{SH}_{1,0}$. Proof. The proof is an adaptation of the corresponding result for branching random walks (see [25, Section 6.5]).

4.3. The boundary between phases B_2 and B_3, and the triple point β = (1 + i)/√2. In this section, we prove the convergence, in probability, of the moments of the rescaled partition function on the boundary between phases B_2 and B_3 to the moments of a Gaussian random variable with random variance; this is the content of Theorem 1.7.
Proof of Theorem 1.7. (i) The proof of Theorem 1.7 (i) is a modification of the proof of Theorem 1.4 (ii), where $M^{SH}_{1,0}$ is the martingale defined in (4.8). Moreover, for k' < k, $\lim_{r\uparrow\infty}\lim_{t\to\infty} \mathbb{E}\big[(N^{c,A}_{\sigma,\tau}(t))^{k'}\, \overline{(N^{c,A}_{\sigma,\tau}(t))}^{\,k}$