On the time constant of high dimensional first passage percolation

We study the time constant $\mu(e_{1})$ in first passage percolation on $\mathbb Z^{d}$ as a function of the dimension. We prove that if the passage times have finite mean, $$\lim_{d \to \infty} \frac{\mu(e_{1}) d}{\log d} = \frac{1}{2a},$$ where $a \in [0,\infty]$ is a constant that depends only on the behavior of the distribution of the passage times at $0$. For the same class of distributions, we also prove that the limit shape is not a Euclidean ball, nor a $d$-dimensional cube or diamond, provided that $d$ is large enough.


Introduction and main results
We study first passage percolation on Z^d for d large. The model is defined as follows. We place a non-negative random variable τ_e, called the passage time of the edge e, at each nearest-neighbor edge of Z^d. The collection (τ_e) is assumed to be independent and identically distributed with common distribution F.
A path γ is a finite or infinite sequence of nearest-neighbor edges in Z^d such that any two consecutive edges in the sequence share an endpoint. For any finite path γ we define the passage time of γ to be T(γ) = Σ_{e∈γ} τ_e.
Given two points x, y ∈ Z^d, one then sets T(x, y) := inf_γ T(γ), where the infimum is over all finite paths γ that start at the point x and end at y. For a review and the current state of the art of the model, we invite the reader to see the recent notes [1] or the classical paper of Kesten [7].
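As an illustration of the definitions above, the passage time T(x, y) can be computed on a finite box with Dijkstra's algorithm. The following minimal Python sketch is our own illustration, not from the paper: the box size, the choice of Exp(1) weights (the case treated by Dhar), and all names are assumptions. Restricting to a box can only overestimate the true passage time, since a geodesic may leave the box.

```python
import heapq
import random

def first_passage_time(src, dst, n, rng):
    """Dijkstra's algorithm for T(src, dst), restricted to the box
    {0, ..., n-1}^2 of Z^2, with i.i.d. Exp(1) edge weights sampled
    lazily.  Restricting to a box can only overestimate the true
    passage time, since a geodesic may leave the box."""
    weights = {}

    def tau(u, v):
        # one weight per undirected edge, shared between both directions
        e = (min(u, v), max(u, v))
        if e not in weights:
            weights[e] = rng.expovariate(1.0)
        return weights[e]

    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            return t
        if t > dist.get(u, float("inf")):
            continue  # stale heap entry
        x, y = u
        for v in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= v[0] < n and 0 <= v[1] < n:
                nt = t + tau(u, v)
                if nt < dist.get(v, float("inf")):
                    dist[v] = nt
                    heapq.heappush(heap, (nt, v))
    return float("inf")

sample = first_passage_time((0, 0), (10, 0), 25, random.Random(0))
print(sample)  # one random sample of (an upper bound on) T(0, 10*e_1)
```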
Here we focus on the large-d behavior of the time constant and the limit shape of the model. These are defined as follows. Let e_1, . . . , e_d be the coordinate vectors of Z^d. The time constant in the direction e_1 is µ(e_1) := lim_{n→∞} T(0, n e_1)/n.
If Eτ_e < ∞, µ(e_1) exists; see [1, Theorem 2.1] and the discussion therein. For each t ≥ 0 let B(t) := {y ∈ R^d : T(0, [y]) ≤ t}, where [y] is the unique point in Z^d such that y ∈ [y] + [0, 1)^d. The pair (Z^d, T(·, ·)) is a pseudo-metric space and B(t) ∩ Z^d is the (random) ball of radius t around the origin. The limit shape is defined by the famous shape theorem as follows.
E[min(t_1, . . . , t_{2d})^d] < ∞, (1.1) where t_i, i = 1, . . . , 2d, are independent copies of τ_e, and F(0) < p_c(d), (1.2) where p_c(d) is the threshold for bond percolation in Z^d. We write rS = {rs : s ∈ S} for any subset S ⊆ R^d and r ∈ R.
Theorem 1.1 (Cox and Durrett [3]). If (1.1) and (1.2) hold, then the first passage percolation model has a limit shape. That is, there exists a deterministic, convex, compact set B in R^d such that, for each ε > 0, P((1 − ε)B ⊆ B(t)/t ⊆ (1 + ε)B for all t large) = 1.
Moreover, the limit shape B = {x ∈ R d : µ(x) ≤ 1} has a non-empty interior and is symmetric about the axes of R d .
Despite the importance of both objects, our knowledge of µ(e_1) and B is almost non-existent. Finding a distribution for which one can explicitly determine µ(e_1) is considered a difficult open problem, and so is deriving further properties of B. The main purpose of this paper is to investigate such questions for d large.
We assume that the passage times have finite mean and the existence of a constant a ∈ [0, ∞] governing the behavior of the distribution function near 0, in the sense of conditions (1.3) and (1.4), which hold with some constant C > 0 on some interval [0, ε_0], ε_0 > 0. Here, we understand a = ∞ as lim_{x→0} P(τ_e ≤ x)/x = ∞. (1.5)
Our first main result is the asymptotic behavior of µ(e_1) as a function of d.
Theorem 1.2. If (1.3) and (1.4) hold, then lim_{d→∞} µ(e_1) d / log d = 1/(2a). (1.6)
We now put the theorem above into historical context. The behavior of the time constant µ(e_1) as a function of d was considered before by Kesten [7, Section 8] and Dhar [4]. Their assumptions on the distribution of the passage times are special cases of ours. First, in [7], under (1.3), (1.4) and the additional assumptions that a ∈ (0, ∞), C = o(1) as x → 0 and τ_e has a density around the origin, Kesten showed the existence of ε > 0 so that (1.7) holds. Second, in [4], Dhar established (1.6) in the case of exponentially distributed passage times. Dhar's proof, however, cannot be adapted to any other distribution as, for instance, it relies heavily on the Markovian property of the ball B(t). Thus, Theorem 1.2 says that the asymptotics obtained by Dhar are valid under rather general assumptions that include those of Kesten.
Remark 1.4. If the distribution has an atom at 0 then (1.5) clearly holds. In this case, Theorem 6.1 in [7] and the fact that the critical probability p_c(d) for bond percolation in Z^d decreases to 0 as d goes to infinity imply that the sequence (µ(e_1))_{d≥1} is eventually equal to 0 and, of course, (1.6) holds.
A word of comment is needed here. If, for some δ > 0, the support of the distribution of the passage times is included in (δ, ∞), then it is clear that (1.6) must hold. Indeed, in this case, a = 0 and µ(e 1 ) ≥ δ for all d. It would be interesting to further study the behavior of µ(e 1 ) in this situation. As we will see, this question seems to be related to the typical length (number of edges) of a geodesic and the behavior of p c (d) as a function of d.
Our second main result excludes the d-dimensional Euclidean ball {x ∈ R^d : |x|_2 ≤ µ(e_1)^{−1}} as a possible limit shape, as well as the cube C := {x ∈ R^d : |x|_∞ ≤ µ(e_1)^{−1}} and the diamond D := {x ∈ R^d : |x|_1 ≤ µ(e_1)^{−1}}. Note that, due to convexity and symmetry, we always have D ⊆ B ⊆ C. Remark 1.6. We prove Theorem 1.5 by showing that the intersection of B with the diagonal line {λ(1, . . . , 1) : λ ∈ R} is strictly contained in the corresponding intersection for the Euclidean ball and strictly contains the one for the diamond D. Using symmetry around the diagonal, the proof can be pushed further to exclude other possible limit shapes. We delay the proof until Section 5. Remark 1.7. One of the main features of the theorem above is that d_0 can be explicitly estimated for any given values of a and C in (1.4). In the case of an exponential random variable or a uniform random variable on some interval [0, s], we show that d_0 = 269,000 is sufficient (but certainly not optimal). We exclude the d-dimensional diamond for all d ≥ 110. See the Appendix.
The fact that the limit shape is not a Euclidean ball is expected to hold for all d ≥ 2. Kesten provided the first results in this direction: he showed (see [7, Remark 8.5]) that this is the case for the exponential distribution if d ≥ 10^6. Remark 1.8. In [2], Couronné, Enriquez and Gerin also considered FPP with exponentially distributed passage times. They provided a constructive way to find an upper bound of order log d/d for µ(e_1). They also claimed that the limit shape is not a Euclidean ball if d ≥ 35. However, the argument presented in [2, Corollary 4] seems unclear to us. Their claim is obtained using numerical results provided in Table 1 of Dhar [4]. It is unclear whether the inequality µ(e_1) ≤ 0.93 log(2d)/(2d) at d = 35 appearing in [2] implies µ(e_1) ≤ C(d) log(2d)/(2d) for some C(d) that leads to the result for d > 35. In view of (1.6), the constant C(d) must approach 1 as d → ∞, even if the numbers appearing in Table 1 in [4] are monotonically decreasing.
The rest of the paper is organized as follows. In Section 2, we sketch the proof of Theorem 1.2. The two following sections are devoted to proving the bounds lim sup_{d→∞} µ(e_1) a d / log d ≤ 1/2 and lim inf_{d→∞} µ(e_1) a d / log d ≥ 1/2, respectively. In Section 5, we prove Theorem 1.5 by deriving a lower bound for the time constant in the diagonal direction. In the last section, we provide the quantitative bounds to control d_0. Throughout the paper, we use e_1, e_2, . . . , e_d to denote the canonical basis vectors of Z^d.

Proof Strategy of Theorem 1.2
The proof strategy is motivated by [7]. The reader will see that the upper bound follows closely from [7] and is the less intricate part. In this section we explain how to derive (2.1) and the differences between our approach and Kesten's proof of (1.7). First, it is known (see for instance [7, p. 246]) that µ(e_1) ≤ E s̃_{0,1}, where
s̃_{0,n} := inf{T(γ) : γ is a path from (0, 0, . . . , 0) to some point in H_n which, except for its final point, is contained in [0, 1) × R^{d−1}},
and H_n denotes the hyperplane {x ∈ Z^d : x · e_1 = n}. We will derive the upper bound for µ(e_1) by bounding E s̃_{0,1} from above. The idea is to look for a path of length n = n(d) to H_1 that has a very small passage time and is contained in a subspace H of dimension p = p(d) < d. To prove the existence of such a favorable path, the second moment method is a natural approach. In [7], Kesten took p = d/2 and considered directed paths whose first (n − 1) steps go along the positive directions +e_2, . . . , +e_{p+1} and then, at the last step, take the e_1 direction to reach H_1. As a result, these paths are necessarily self-avoiding, which allows an estimate of the passage time using a sum of i.i.d. random variables. Since these paths only take the positive directions +e_2, . . . , +e_{p+1}, no more than (d/2)^{n−1} paths were considered.
Kesten's proof was then a trade-off between examining a large collection of paths and being able to estimate s̃_{0,1} using sums of i.i.d. random variables. His strategy led to the upper bound in (1.7). In order to get an optimal upper bound, we explore a subspace that is almost as large as the entire Z^d by choosing p = d − o(d). Furthermore, we allow paths to go along any of the 2p possible directions ±e_2, . . . , ±e_{p+1}. Under this setting, we are able to examine nearly all of the paths leading to H_1 and obtain the optimal upper bound. The price to pay is that now some paths will be self-intersecting, so we cannot approximate their passage times by a sum of i.i.d. random variables. Furthermore, the computation in the second moment method becomes more elaborate. At the end of the day, the price is affordable since, in high dimension, the majority of random walk paths are self-avoiding. The main estimate is obtained by carefully counting patterns of overlapping segments for a given pair of random walks in Z^d. This main step is done in Section 3.4.

Setup
We are interested in the self-avoiding paths of length n from 0 to H_1 whose first (n − 1) steps use directions ±e_2, . . . , ±e_{p+1} and whose last step is e_1. Denote by P_n the set of all such paths. For γ ∈ P_n, we write γ = (S_0 = 0, S_1, S_2, . . . , S_n), where S_k ∈ Z^d is such that (i) S_i ≠ S_j whenever i ≠ j, (ii) S_k − S_{k−1} ∈ {±e_2, . . . , ±e_{p+1}} for 1 ≤ k ≤ n − 1, and (iii) S_n − S_{n−1} = e_1. Let N_{n,x} be the number of paths γ ∈ P_n such that T(γ) ≤ x, for some δ, η > 0 fixed; we will eventually send δ to 0.

First moment of N n,x
By definition, E N_{n,x} can be written as E N_{n,x} = |P_n| P(S_n ≤ x), where S_n is a sum of n i.i.d. copies of τ_e. We will need the following two lemmas to estimate |P_n| and P(S_n ≤ x). The first one, Lemma 3.1, is about the number of self-avoiding walks, for which the estimate has been improved over the years [5, 6, 8]; its part (a) provides the bound in terms of the constant ξ_d. Lemma 3.2. Let X_1, X_2, . . . be i.i.d. non-negative random variables satisfying (1.4). Let S_n := Σ_{i=1}^n X_i be the partial sum. Then, for all n ≥ 1 and 0 ≤ x ≤ ε_0, P(S_n ≤ x) ≤ (a(1 + Cε_0) x)^n / n!. Proof. The result follows from (1.4) and a calculation similar to the one in [7, Lemma 8.8].
By Lemma 3.1(a) and sub-additivity [8, p. 9], we obtain a lower bound on |P_n|. Also, Lemma 3.2 and Stirling's formula provide an upper bound on P(S_n ≤ x). Putting these together, and noting first that x → 0 as d → ∞, we take n = ⌊log d⌋ and conclude, for all d sufficiently large, a lower bound for E N_{n,x}. For δ ∈ (0, 1) fixed and d large, this implies in particular that E N_{n,x} → ∞ as d → ∞.

Proof of the Upper Bound
By definition, the second moment of N_{n,x} can be written as a sum over pairs of paths in P_n. Suppose for now that we are able to show that, for some 0 < A < ∞,
E(N_{n,x}^2) ≤ A (E N_{n,x})^2 (3.3)
for all d large. By the Cauchy–Schwarz inequality, P(N_{n,x} > 0) ≥ (E N_{n,x})^2 / E(N_{n,x}^2) ≥ 1/A. This means that, with positive probability, we can find a path γ ∈ P_n from 0 to H_1 such that T(γ) ≤ x. The proof of (3.3) will be given in Section 3.4.
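The Cauchy–Schwarz step invoked here is the standard second moment (Paley–Zygmund type) argument; spelled out, with N = N_{n,x}:

```latex
\mathbb{E}N = \mathbb{E}\!\left[N\,\mathbf{1}_{\{N>0\}}\right]
\le \sqrt{\mathbb{E}(N^{2})}\,\sqrt{\mathbb{P}(N>0)}
\quad\Longrightarrow\quad
\mathbb{P}(N_{n,x}>0)\;\ge\;\frac{(\mathbb{E}N_{n,x})^{2}}{\mathbb{E}(N_{n,x}^{2})}\;\ge\;\frac{1}{A}.
```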
Let H be the subspace spanned by e_2, . . . , e_{p+1}. Now we focus on the coordinates e_{p+2}, e_{p+3}, . . . , e_d. For p + 2 ≤ j ≤ d, let E_j be the event that there exists a path γ from e_j to H_1 which, except for its final point, is contained in ([0, 1) × R^{d−1}) ∩ (H + e_j) and has T(γ) ≤ x. By translation invariance and (3.3), each E_j has probability at least 1/A. Furthermore, these E_j's are independent, and if any of the events E_j happens, we can bound s̃_{0,1} by x plus the passage time of the edge between 0 and e_j. Therefore, taking expectations and noticing that, for all d sufficiently large and any η > 0, the second term in (3.4) vanishes as δ → 0, this gives us E s̃_{0,1} ≤ (1 + o(1)) log d/(2ad), as desired.

Second moment of N n,x
We are going to prove (3.3) in this section. We first rewrite the second moment according to the number l ≤ n of overlapping edges between γ and γ′. Note that since we only consider γ, γ′ ∈ P_n, which are self-avoiding, the quantity |γ ∩ γ′| is unambiguously defined as the number of edges in γ that also appear in γ′ (or vice versa). In what follows, we always write γ = (S_0 = 0, S_1, . . . , S_n) and γ′ = (S′_0 = 0, S′_1, . . . , S′_n). When l = n, due to the fact that both paths start from the origin and are self-avoiding, we know γ = γ′, and the corresponding term can be evaluated directly. When l = 0, γ and γ′ do not share any edges.
For the remaining 1 ≤ l ≤ n − 1, we can rewrite the corresponding sum, and Lemma 3.2 implies a bound valid for all 1 ≤ l ≤ n − 1 and d large, with the notation indicated there. Note that E N_{n,x} → ∞ and that the prefactor of the second term satisfies, for d large, an estimate whose exponent we denote by g_η(δ, d). For δ, η > 0 fixed, g_η(δ, d) → 0 as d → ∞. Hence, we can choose d sufficiently large that e^{g_η(δ,d)} < 2, which yields the desired bound. To proceed, we need the following proposition.
When γ, γ′ ∈ P_n and |γ ∩ γ′| = l, there are two cases: (i) if S_{n−1} ≠ S′_{n−1}, it is necessary that all l overlapping edges occur in the first (n − 2) steps, since both γ and γ′ take the e_1 direction at the last step; (ii) if S_{n−1} = S′_{n−1}, then γ and γ′ share the last edge, so that there are at most (l − 1) overlapping edges in their first (n − 1) steps. Observe that the denominator of (3.10) is just the number of all pairs of paths in Z^p of length (n − 1) starting from the origin. Hence, if we put the uniform measure on all pairs of simple random walk paths (γ̃, γ̃′) in Z^p starting from the origin and of length (n − 1), the left side of (3.10) is bounded by the probabilities of the two cases, defined in (3.11) and (3.12). Here γ̃ and γ̃′ can be thought of as the first (n − 1) steps of γ and γ′, respectively, and we abuse notation by writing
γ̃ = (S_0 = 0, S_1, . . . , S_{n−1}), γ̃′ = (S′_0 = 0, S′_1, . . . , S′_{n−1}). (3.13)
We prove Case (i) and Case (ii) in Lemma 3.4(i) and (ii), respectively. The proofs of both cases are very similar, based on counting the "bubbles" of two intersecting simple random walk paths. We explain the construction in full detail in Case (i), whereas for Case (ii) we just point out the differences.
Let γ and γ′ be two paths sampled uniformly and independently from all simple random walk paths in Z^p, starting from the origin and of length n ≤ 10 log p. Note that the constant 10 takes into account that we are actually interested in paths of length log d − 1 and p ≈ d; such differences are negligible when d is large. Cases (i) and (ii) follow from Lemma 3.4 when we replace n by (n − 1) and γ, γ′ by γ̃, γ̃′, respectively.

Proof of Lemma 3.4 (i).
For each 1 ≤ K ≤ l, let B_K denote the event that the l overlapping edges are clustered in K consecutive pieces. Let n_1, . . . , n_K be the lengths of these segments and m_1, . . . , m_K (resp. m′_1, . . . , m′_K) be the indices of their starting points in γ (resp. in γ′). For example, in the left part of Figure 1, we have l = 5 and K = 2. For convenience, we also denote by A_l the event {|γ ∩ γ′| = l} and by G_γ (resp. G_{γ′}) the event that γ (resp. γ′) is self-avoiding. The original probability is then expressed by summing over K and these indices. Note that, on the event G_γ ∩ G_{γ′}, the number of segments K must be the same in both γ and γ′; i.e., the situations in the middle and on the right of Figure 1 cannot happen. When K = 1, the l overlapping edges are clustered in one segment. There are two possible alignments for the overlapping segment: either along the same direction or along opposite directions (see Figure 2). In both cases, the non-overlapping pieces of γ and γ′ form a "bubble"-like shape.
The first situation happens with probability no more than the product of the corresponding step probabilities. Here we have used the fact that if (S_m)_{m≥0} is a simple random walk on Z^p, then for all m ≥ 1 and t ∈ Z^p,
P(S_m = t) ≤ 1/(2p). (3.14)
We write out all terms for the second situation (Figure 2, right), as this will convey the spirit of the general K > 1 case; its probability is bounded by a similar product. Combining the two situations and using n = C_0 log p, we conclude the K = 1 bound. From the K = 1 case, we have the following observations, of which the last is the most important for the general 1 < K ≤ l case.
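The bound (3.14) can be checked by exhaustive enumeration in a small case. The sketch below is our own illustration (not part of the proof), taking p = 2 so that 2p = 4: it enumerates all walks of length m on Z^2 and verifies that sup_t P(S_m = t) ≤ 1/(2p).

```python
from itertools import product

def max_point_prob(m):
    """sup over t of P(S_m = t) for the simple random walk on Z^2,
    computed by exhaustive enumeration of all 4^m walks."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    counts = {}
    for walk in product(steps, repeat=m):
        end = (sum(s[0] for s in walk), sum(s[1] for s in walk))
        counts[end] = counts.get(end, 0) + 1
    return max(counts.values()) / 4 ** m

# (3.14) with 2p = 4: point probabilities never exceed 1/(2p)
for m in range(1, 7):
    assert max_point_prob(m) <= 0.25
```

The proof of (3.14) is one line: conditioning on the position at time m − 1, the last step lands on any fixed t with probability at most 1/(2p).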
a. The overlapping edges give the factor (1/(2p))^l.
b. There are either (K − 1) or K bubble segments (i.e., segments that do not overlap with the other path) in γ and γ′, depending on whether m_1 and m′_1 are both zero or not. The total number of bubble and overlapping segments is either 2K − 1 or 2K.
c. Conditioning on the values of all 2K − 1 or 2K segments of γ′, there is at most one choice for the values on the segments of γ. The event that "those K overlapping segments in γ take particular values" is absorbed into the event that "each edge in these segments overlaps with γ′", whereas the event that "the bubble segments take particular values" occurs with probability no more than (1/(2p))^{K−1}, due to (3.14).
We now proceed to the general 1 < K ≤ l case. For each fixed K, we first compute the number of ways to divide the l overlapping edges into K (non-empty) groups of sizes n_1, n_2, . . . , n_K, which is no more than binom(l − 1, K − 1) ≤ l^{K−1}. Next, we determine the positions of these K overlapping segments in γ: there are binom(n, K) ways to choose the starting points m_1 < m_2 < · · · < m_K of these segments, and once we have the starting points, there are K! ways to assign a running length n_j to each starting point m_i. This is over-counting because if, say, m_2 − m_1 < n_3, then the overlapping segment starting at m_1 cannot be longer than n_3. We do the same for γ′. Once we have the locations of the overlapping segments in γ and γ′, we know exactly which segment in γ overlaps with which segment in γ′. There is an additional factor 2^K, which accounts for the two possible directions of alignment of each overlapping segment. Together, these give a combinatorial factor that upper-bounds the number of possible overlapping patterns (one of these is illustrated in Figure 3). Each pattern occurs with probability no more than the corresponding power of 1/(2p), and hence the claimed bound follows for all 2 ≤ K ≤ l.
Summing over K and using the fact that K ≤ l ≤ n ≤ C_0 log p, we obtain the desired estimate. When K = 1 and either m_1 = m′_1 = 0 (i.e., γ and γ′ overlap in the first l − 1 edges) or m_1 = m′_1 = (n − l + 1) (i.e., γ and γ′ overlap in the last l − 1 edges), there is only one bubble segment in both γ and γ′. Also, on the event that they are both self-avoiding and S_0 = S′_0, S_n = S′_n, the overlapping edges must align in the same direction. This case is illustrated in Figure 4. In either situation, the probability of seeing such a "bubble" is controlled as follows, where S̃_m denotes a simple random walk on Z^p run for m ≥ 3 steps: conditioning on the event {S̃_2 = x} with x ≠ 0, two of the next (m − 2) steps must go along the reverse directions of the coordinates used in the first two steps in order to return to the origin at the m-th step (note that m must be even). This happens with probability no more than (m − 2)^2/(2p)^2. For all other values of m_1 and m′_1, there are at least two bubble segments (e.g., Figure 5),
Figure 5: Representation of the case K = 1 with a negative alignment of the overlapping segment. In this case two bubbles must be created. Parallel segments represent identical edges.
which we can easily estimate using (3.14). We use 1 = K ≤ l ≤ n ≤ C_0 log p again and compute the resulting bound. For K = 2, if γ or γ′ has two bubble segments, we can use the strategy from Case (i) and (3.14). If there is only one bubble segment, then the two overlapping segments must attach to S_0 = S′_0 = 0 and S_n = S′_n. Since γ and γ′ are self-avoiding, this situation is quite similar to the fusion of the bubbles in Figure 4(a) and (b). The bubble in the middle can be easily estimated using (3.15). We leave the details to the reader.

Proof of the Lower Bound
In this section we establish the desired lower bound. We assume the existence of a constant a ∈ [0, ∞) so that the following holds; note that this condition is weaker than (1.4). For the proof of Proposition 4.1, we will need a few lemmas. Let b_n = T(0, H_n) be the passage time from the origin to the hyperplane H_n. Proof. This is a consequence of the Borel–Cantelli lemma, as the time constant µ(e_1) is also the limit of b_n/n as n goes to infinity [7, Equation (1.13)].
We still need one combinatorial estimate that we take from [7, (6.20)].
Lemma 4.4. For any ρ ≥ 0, the number N_{k,n} of lattice paths in Z^d from 0 to H_n of k steps is at most
Proof of Proposition 4.1. For 0 < δ < 1 fixed, choose x as in (4.2). We will use the union bound in (4.7). Choosing δ ≤ 1/2, the second sum on the right side of (4.7) is bounded above by 2e^{−n(1−δ) log d}, which is summable in n. On the other hand, if we write z = k/n, a little algebra implies that the first sum in (4.7) is bounded above by (4.8). The term inside the large square bracket is bounded by (4.9). As, for any c > 0, the function f(z) = (c/z)^z has maximum value e^{c/e} on (0, c], we obtain that (4.9) is bounded above by a summable quantity. We now end this subsection with the proof of Theorem 1.2 in the cases a = ∞ and a = 0. In the first case, assumption (1.5) implies that for any M > 0 it is possible to find ε > 0 such that for any x < ε, P(τ_e ≤ x) ≥ Mx. Let Y_M be a random variable with density f(y) = M on [0, ε] and such that for any t ∈ R, P(Y_M ≤ t) ≤ P(τ_e ≤ t).
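The elementary maximization used above can be verified numerically. The snippet below is an illustration, not part of the proof: it checks on a grid that (c/z)^z never exceeds e^{c/e} on (0, c], the maximum being attained at z = c/e, where the derivative of z log(c/z), namely log(c/z) − 1, vanishes.

```python
import math

def f(c, z):
    return (c / z) ** z

# log f(c, z) = z * log(c / z) has derivative log(c / z) - 1, which
# vanishes at z = c / e, so the maximum of f on (0, c] is e^(c / e).
for c in (0.5, 1.0, 3.0):
    grid_max = max(f(c, c * k / 10000) for k in range(1, 10001))
    assert grid_max <= math.exp(c / math.e) + 1e-9
    assert abs(f(c, c / math.e) - math.exp(c / math.e)) < 1e-9
```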
As P(Y_M ≤ ε) ≤ P(τ_e ≤ ε), such a random variable can indeed be constructed: one simply extends the distribution function on [ε, ∞) by a non-decreasing function that has limit 1 at ∞ and is bounded above by t ↦ P(τ_e ≤ t). This way, we can use the comparison theorem of van den Berg and Kesten [10, Theorem 2.13] to obtain, for any d ≥ 2, µ(e_1) ≤ µ_{Y_M}(e_1), where µ_{Y_M} is the time constant for FPP in Z^d with passage times distributed according to Y_M. Since Y_M satisfies the hypotheses of Theorem 1.2 with a = M, we get the corresponding upper bound. Taking M to infinity gives the desired result.
The case a = 0 is similar. For any m > 0, we construct a random variable Y_m that is stochastically dominated by τ_e, in the sense that P(Y_m ≤ t) ≥ P(τ_e ≤ t), and satisfies (1.4) with a = m. The van den Berg–Kesten comparison theorem combined with the result for a ∈ (0, ∞) implies the corresponding lower bound. The result follows by taking m to zero.

Application to the limit shape
In this section, we will exclude certain candidate limit shapes in high dimension, including the Euclidean ball. The method here is the same as the one used by Kesten [7]. We will compare the time constant µ(e_1) in the e_1 direction with the time constant µ_* in the diagonal direction. Here µ_* is defined as µ_* := lim_{n→∞} T(0, J_n)/n, where J_n is the hyperplane J_n := {(x_1, . . . , x_d) : x_1 + x_2 + · · · + x_d = n√d}, and the limit also holds in L^1. Without any loss in what follows, we slightly abuse notation by taking √d to be the smallest integer greater than the square root of d. It has been shown in [2] that, when τ_e follows a standard exponential distribution, then (5.1) holds for d ≥ 2, where α_* is the non-null solution of coth α = α. Recently, still under the assumption of exponential passage times, Martinsson [9] proved a matching upper bound, establishing lim_{d→∞} √d µ_* = √(α_*^2 − 1)/2.
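For concreteness, α_* can be computed by bisection. The sketch below is our own (the bracket [1.1, 1.3] is an assumption that happens to contain the root): it solves coth α = α and evaluates the limiting constant √(α_*^2 − 1)/2 ≈ 0.33.

```python
import math

def g(a):
    # coth(a) - a; alpha_* is the positive root of coth(a) = a
    return math.cosh(a) / math.sinh(a) - a

# bisection on [1.1, 1.3], where g(1.1) > 0 > g(1.3)
lo, hi = 1.1, 1.3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
alpha_star = 0.5 * (lo + hi)
limit_const = math.sqrt(alpha_star ** 2 - 1) / 2  # lim sqrt(d) * mu_*
print(alpha_star, limit_const)
```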

A lower bound in the diagonal direction
We start by showing that the lower bound (5.1) is also true in our setting in the large-d limit. Theorem 5.1. Suppose the edge weight distribution satisfies (1.4) and let µ_* = µ_*^d be the time constant in the diagonal direction as defined above. Then the lower bound (5.1) remains valid for all d large enough. Proof. For δ ∈ (0, 1), we may always choose d large enough such that (5.3) holds. Recall that ε_0 is the right endpoint of the interval [0, ε_0] on which the distribution of τ_e satisfies (1.4) with constants a and C. We then set x accordingly, which is admissible due to (5.2). Then, for any fixed n ∈ N and any k ≥ n√d, we can repeat the computation before (4.4). To see this, one can just follow the proof of Lemma 4.3 with y = k/(n√d) and γ = k/(nx√d), provided (4.5) holds for our choice of x and any y ≥ 1; this is always possible by choosing d large enough. Next, we use the upper bound from Lemma 3 of [2] for the number D_k^{(n)} of self-avoiding walks of length k from 0 to J_n:

Hence, we have
If we let Y := min{t_1, . . . , t_d}, where the t_i's are independent copies of τ_e, we see by construction that µ_* is controlled in terms of EY. Our assumptions on the distribution of τ_e imply the existence of a positive constant c, independent of d, such that EY ≤ c/d for all d ≥ 2. Indeed, by hypothesis (1.4) and the fact that a > 0, we can find δ, ε > 0 so that P(τ_e ≤ x) ≥ εx for all x ∈ [0, δ], and such that P(τ_e > δ) < 1. Choosing a constant m > Eτ_e, we write EY as a sum of three integrals, over [0, δ], [δ, m] and [m, ∞): in the first we use hypothesis (1.4), and in the last we use Markov's inequality. As P(τ_e > δ) < 1, we can find c > 0 such that all three terms are bounded above by c/d, for all d ≥ 2. Combining this with Proposition 4.1, we have, for any d large enough, the desired comparison, proving (5.6).
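The bound EY ≤ c/d can be checked numerically from the identity E[min(t_1, . . . , t_d)] = ∫_0^∞ P(τ_e > t)^d dt. The sketch below is our own illustration (the truncation T = 50 and step count are arbitrary choices): it confirms EY = 1/d exactly for Exp(1) weights and EY = 1/(d + 1) for U[0, 1] weights.

```python
import math

def expected_min(survival, d, T=50.0, steps=200000):
    """E[min(t_1, ..., t_d)] = integral_0^infty P(tau_e > t)^d dt,
    computed with the trapezoidal rule on [0, T]; the tail beyond T
    is negligible for the two examples below."""
    h = T / steps
    total = 0.5 * (survival(0.0) ** d + survival(T) ** d)
    for k in range(1, steps):
        total += survival(k * h) ** d
    return total * h

d = 10
# Exp(1): the minimum of d copies is Exp(d), so E Y = 1/d exactly
ey_exp = expected_min(lambda t: math.exp(-t), d)
# Uniform[0, 1]: E Y = 1/(d + 1) <= 1/d, consistent with E Y <= c/d
ey_unif = expected_min(lambda t: max(0.0, 1.0 - t), d)
print(ey_exp, ey_unif)
```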

Appendix
In this section, we show how to compute a quantitative upper bound on µ(e_1). This bound will only depend on d, Eτ_e and on the constants a, ε_0, C satisfying (1.4). We proceed as follows. First, slightly modifying (3.4) in Section 3, we notice that for any η > 0 and any A > 0, B, δ ∈ (0, 1) satisfying (5.9), in particular E(N_{n,x}^2) ≤ A(E N_{n,x})^2 and the definition of y there, the bound (5.10) holds. Letting Υ(A, B, δ, η) be the right side of (5.10), we obtain an upper bound on µ(e_1) in which the infimum is taken over all η > 0 and A, B, δ satisfying (5.9). As we will see, the main task here is to find any (but preferably the smallest) A that satisfies (5.9). In Section 3.4, we found such a number for any given δ ∈ (0, 1) and η > 0. Indeed, we saw that we can choose any A = A(δ, η) satisfying (5.12), where upper bounds on 1/E N_{n,x}, f_{a,C}(δ, d), and g_η(δ, d) can be recovered from (3.2), (3.8) and (3.9), respectively, once we are given δ, η, d and the parameters a, C in (1.4) for the distribution of τ_e. The issue now is to control the o(p^{−1/2}) term, where p = d − d^δ (1 + η) log d > 0. This will be done in the next section of the appendix. In the last section, we provide a few specific examples of these computations.

A refinement of Lemma 3.4
To get a quantitative estimate of the lower bound on A using (5.12), we need to control the o(p^{−1/2}) term. This term comes exactly from the estimates on the probabilities in Lemma 3.4. We control these probabilities by carrying out the combinatorics in the "bubble" argument more precisely. We start from the two cases defined in (3.11) and (3.12). For the first case, it is not difficult to see from the proof of Lemma 3.4(i) that the stated bound holds, where the right side is obtained by replacing n in Lemma 3.4(i) by (n − 1) and following the combinatorics there. For the second case, we provide more detail here. As before, we let γ̃ and γ̃′ be the paths obtained by removing the last step of γ and γ′, respectively. We use B_K(γ̃, γ̃′) to denote the event that the overlapping edges in γ̃ and γ̃′ are clustered in K segments. We compute the K = 1, 2, 3 cases separately, whereas all K ≥ 4 cases are considered together. When K = 1 and the overlapping segments in γ and γ′ align in the same direction, there can be either one bubble or two bubbles; the "one-bubble" diagram is in Figure 4. When K = 2 and there is only one bubble, both overlapping segments have to align in the same direction and attach to the start and end points of γ̃ and γ̃′. This diagram occurs with probability no more than the stated bound, where (l − 2) is the number of ways of grouping the (l − 1) overlapping edges into two consecutive overlapping segments, and ((2n − 2l − 2)/(2p))^2 is the factor for the middle bubble, applying (3.15). We can also estimate the probabilities of the other types of alignments in the K = 2 case, which gives the bound on P(Case (ii); B_2(γ̃, γ̃′)). Combining everything above gives a quantitative upper bound for P(Case (ii)). This upper bound can be reduced even further by computing P(Case (ii); B_K(γ̃, γ̃′)) separately for more values of K, but we stop here. Therefore, the o(p^{−1/2}) term is no more than the sum of (I) through (V).

Special Case: Exponential Distribution
Limit shape is not a d-dimensional cube for d ≥ 269,000.
For the special case of τ_e following a standard exponential distribution, we have a = 1, C = o(1), and Eτ_e = 1. Moreover, µ_* ≥ √(α_*^2 − 1)/(2√d) for all d ≥ 2. To see this, one can either check [2] or go through the proof of Theorem 5.1, using the fact that (5.3) holds with δ = 0.
To estimate µ(e_1), instead of using Lemma 3.2 we can compute P(S_n ≤ x) explicitly in this case:
P(S_n ≤ x) = ∫_0^x y^{n−1} e^{−y}/(n − 1)! dy =: γ(n, x),
where γ(n, x) is the cumulative distribution function of a gamma distribution with shape parameter n and scale parameter 1. This allows us to replace the ratio P(S_{n−l} ≤ x)/P(S_n ≤ x) by γ(n − l, x)/γ(n, x). We also use γ(n, x) to estimate 1/E N_{n,x}, via 1/E N_{n,x} = 1/(|P_n| γ(n, x)).
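For completeness, γ(n, x) can be evaluated without special-function libraries via the standard identity γ(n, x) = 1 − e^{−x} Σ_{k=0}^{n−1} x^k/k!, valid for integer shape n; a minimal sketch (our own, not from the paper):

```python
import math

def gamma_cdf(n, x):
    """P(S_n <= x) for S_n a sum of n i.i.d. Exp(1) variables, i.e. the
    Gamma(n, 1) distribution function, via the Poisson-sum identity
    gamma(n, x) = 1 - e^{-x} * sum_{k=0}^{n-1} x^k / k! (integer n)."""
    s, term = 0.0, 1.0
    for k in range(n):
        if k > 0:
            term *= x / k
        s += term
    return 1.0 - math.exp(-x) * s

# n = 1 reduces to the Exp(1) distribution function 1 - e^{-x}
assert abs(gamma_cdf(1, 2.0) - (1.0 - math.exp(-2.0))) < 1e-12
# the ratio gamma(n - l, x) / gamma(n, x) used in the second moment bound
print(gamma_cdf(3, 0.5) / gamma_cdf(5, 0.5))
```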
Limit shape is not a d-dimensional diamond for any d ≥ 110.
Recall that Y := min{t_1, . . . , t_d}, where the t_i's are i.i.d. copies of τ_e. For the standard exponential distribution we have EY = d^{−1} in (5.7), so µ_* ≤ d^{−1/2} for any d ≥ 2. At the same time, we can take δ = 0 in Lemma 4.3, which changes the exponent of δ in (4.8) from 2 to 1. It follows that (4.7) and (4.10) are summable for any δ ∈ (0, 1) and all d satisfying (5.15) for this choice of δ and d. This implies that the limit shape is not a d-dimensional diamond (as µ_* < √d µ(e_1)) for any d that satisfies d > exp(2/(1 − δ)) and (5.15). Choosing δ = (2 + log 2)/(4 + log 2), we see that both conditions are satisfied for any d ≥ 110.
If τ_e is not exponentially distributed and we only have EY ≤ d^{−1}, for instance when τ_e ∼ U[0, 1], we must keep the exponent of δ in (4.8) equal to 2, and the choice δ = 0.669 excludes the d-dimensional diamond for d ≥ 416.