Entropy reduction in Euclidean first-passage percolation

The Euclidean first-passage percolation (FPP) model of Howard and Newman is a rotationally invariant model of FPP built on a graph whose vertices are the points of a homogeneous Poisson point process. It was shown that one has (stretched) exponential concentration of the passage time $T_n$ from $0$ to $n\mathbf{e}_1$ about its mean on scale $\sqrt{n}$, and this was used to show the bound $\mu n \leq \mathbb{E}T_n \leq \mu n + C\sqrt{n} (\log n)^a$ for $a,C>0$ on the discrepancy between the expected passage time and its deterministic approximation $\mu = \lim_n \frac{\mathbb{E}T_n}{n}$. In this paper, we introduce an inductive entropy reduction technique that gives the stronger upper bound $\mathbb{E}T_n \leq \mu n + C_k\psi(n) \log^{(k)}n$, where $\psi(n)$ is a general scale of concentration and $\log^{(k)}$ is the $k$-th iterate of $\log$. This gives evidence that the inequality $\mathbb{E}T_n - \mu n \leq C\sqrt{\mathrm{Var}~T_n}$ may hold.


INTRODUCTION
In [8], C. D. Howard and C. M. Newman introduced the following Euclidean first-passage percolation (FPP) model on ℝ^d: Let Q ⊂ ℝ^d be a rate one Poisson point process. Denote by q(x), x ∈ ℝ^d, the closest point to x in Q, breaking ties arbitrarily. Fix α > 1 and define, for k ≥ 1 and a finite sequence r = (q_1, ..., q_k) of points in Q,

T(r) = Σ_{i=1}^{k−1} ‖q_{i+1} − q_i‖^α,

where ‖·‖ is the Euclidean norm. Such a sequence r = (q_1, ..., q_k) is called a path in Q; r can also be viewed as a subset of Q, and we write r ⊂ Q. Define, for q, q′ ∈ Q, T(q, q′) = inf{T(r)}, where the infimum is over all finite sequences r ⊂ Q with q_1 = q and q_k = q′, and k is the length of r. (The condition α > 1 is imposed because if 0 ≤ α ≤ 1, then the straight line segment connecting any two Poisson points is a minimizing path for T, and the analysis becomes trivial.) For x, y ∈ ℝ^d, define T(x, y) = T(q(x), q(y)) and set T_n = T(0, ne_1). By subadditivity, the time constant µ exists and is defined by the formula

µ = lim_{n→∞} E T_n / n.

By the subadditive ergodic theorem, the convergence also holds almost surely, so that in a certain sense, T_n = µn + o(n).
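For intuition, the model is straightforward to simulate on a finite box. The sketch below (our own hypothetical helper names, not from the paper; a finite box only approximates the infinite-volume process) samples a rate-one Poisson point process and computes passage times by running Dijkstra's algorithm on the complete graph with edge weights ‖q_i − q_j‖^α:

```python
import heapq
import math
import random

def sample_poisson(mean, rng):
    """Knuth's method for a Poisson(mean) sample; adequate for moderate means."""
    L = math.exp(-mean)
    k, p = 0, 1.0
    while p > L:
        p *= rng.random()
        k += 1
    return k - 1

def poisson_points(side, rng):
    """Rate-one Poisson point process on the square [0, side]^2."""
    n = sample_poisson(side * side, rng)
    return [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]

def nearest(points, x):
    """Index of q(x), the Poisson point closest to x."""
    return min(range(len(points)), key=lambda i: math.dist(points[i], x))

def passage_time(points, alpha, src, dst):
    """T(q_src, q_dst): infimum of sum ||q_{i+1} - q_i||^alpha over point
    sequences, computed by Dijkstra on the complete graph over the points."""
    dist = [math.inf] * len(points)
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist[u]:
            continue
        for v in range(len(points)):
            w = d + math.dist(points[u], points[v]) ** alpha
            if w < dist[v]:
                dist[v] = w
                heapq.heappush(pq, (w, v))
    return dist[dst]
```

For α > 1 a single hop is not automatically optimal, so the shortest-path search is genuinely needed; for α ≤ 1 the subadditivity of t ↦ t^α makes the direct edge a minimizing path, matching the remark above.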
In this and related models (lattice FPP and continuum analogues, for example), it is customary to measure the rate of convergence in the definition of µ by splitting T_n − µn into a random fluctuation term and a nonrandom fluctuation term:

T_n − µn = (T_n − E T_n) + (E T_n − µn).

Typically the random term is analyzed using concentration inequalities (for functions of independent random variables), which have developed significantly of late. In FPP models, current bounds on random fluctuations are still quite far from the predictions, and this presents an ongoing challenge to researchers. In contrast, there is no general method for providing upper bounds on nonrandom fluctuations of subadditive ergodic sequences. In recent years, though, techniques have been developed [1, 12] to bound these nonrandom errors for many lattice models in terms of the random ones. Specifically, if one has a concentration inequality of the type

P(|T_n − E T_n| ≥ λψ(n)) ≤ C_1 e^{−C_0 λ^{κ_1}}    (1.1)

for λ ≥ 0 and a suitable function ψ(n) (so far, only results with ψ(n) at least of order √n (in Euclidean FPP) or √(n/log n) (in lattice FPP) have been proved), then one can derive the bound µn ≤ E T_n ≤ µn + C ψ(n) log n.
(In fact, only the lower tail inequality is usually needed.) A natural question emerges: in these models, can one find C > 0 such that

E T_n − µn ≤ C √(Var T_n)?

If the answer is yes, it means that the difference T_n − µn (used to control geodesics, for instance) can be reasonably well approximated by T_n − E T_n. Furthermore, due to the general lower bounds on nonrandom fluctuations proved in [4], it would suggest that the nonrandom fluctuation term is of the same order as the random one (as is the case in exactly solvable directed last-passage percolation [5, Corollary 1.3]). This question is the focus of our paper. Although we cannot prove this inequality, we prove a weaker but close one. Specifically, our main method is an inductive "entropy reduction" technique which shows that for any k, there is a constant C_k such that for large n,

µn ≤ E T_n ≤ µn + C_k ψ(n) log^{(k)} n,

where ψ(n) is from (1.1) and log^{(k)} n is the k-th iterate of log (see Theorem 2.5). This gives strong evidence that the answer to the above question is yes.
In the next section, we give some background on Euclidean FPP from [9] and sketch the main strategy to prove general bounds on nonrandom fluctuations in the model. In Section 2, we state our main assumptions on ψ and the four results (bounds on nonrandom fluctuations, concentration estimates, and geodesic wandering estimates) which come out of our inductive method.
1.1. Background. A geodesic between two points x, y ∈ ℝ^d is a path r ⊂ Q such that T(x, y) = T(r). Since α > 1, geodesics exist and are unique almost surely [9, Proposition 1.1]. Denote by M(x, y) the geodesic between x and y. Note that M(x, y) can also be viewed as a subset of Q.
First we quote some results from [9].

Theorem 1.1 ([9]). There exist constants C_0, C_1, κ_1, κ_2 > 0 such that

P(|T_n − E T_n| ≥ x√n) ≤ C_1 e^{−C_0 x^{κ_1}}

for all n ≥ 0 and 0 ≤ x ≤ C_0 n^{κ_2}.

Theorem 1.2 ([9]). There exists a constant C_1 > 0 such that for all large n,

µn ≤ E T_n ≤ µn + C_1 √n (log n)^{1/κ_1}.    (1.3)
Theorem 1.3 ([9], Theorem 2.4). For any ε ∈ (0, κ_2/2), there exist constants C_0, C_1 > 0 such that

By a simple modification of the proof of [9, Theorem 2.4], one can show that for some constant C_1 > 0,    (1.4)

The factor (log n)^{1/κ_1} in (1.3) and (1.4) comes from the proof technique. Here we give a sketch of the proof of Theorem 1.2, hinging on the following result, which is [9, Lemma 4.2].

Lemma 1.4 ([9], Lemma 4.2).

Proof: The proof is copied from [9] for completeness. It is easily verified that, for c > 1/(2 − ζ), the function τ̃(x) := τ(x) − cσ(x) satisfies τ̃(2x) ≥ 2τ̃(x) for all large x. Iterating this n times yields τ̃(2^n x) ≥ 2^n τ̃(x), or τ̃(2^n x)/(2^n x) ≥ τ̃(x)/x. Under our hypotheses on τ and σ, τ̃(x)/x → ν as x → ∞, so letting n → ∞ shows that τ̃(x)/x ≤ ν for all large x. ■

Returning to the proof of (1.3), due to the previous lemma it suffices to prove E T_{2n} ≥ 2 E T_n − C_1 √n (log n)^{1/κ_1}. Now consider the geodesic M(0, 2ne_1) and let q be the first point in M(0, 2ne_1) such that ‖q‖ ≥ n. Then we have T_{2n} = T(0, q) + T(q, 2ne_1), and the proof is completed once we show that with positive probability, both of the following bounds hold:

Since q is a random point, in order to prove the second bound, one needs to apply Theorem 1.1 to all pairs of the form (0, x) where x satisfies ‖x‖ ≈ n. Because we have to apply Theorem 1.1 at least O(n) times, if we use a union bound, we need the probability in Theorem 1.1 to be at most of order 1/n^r for some large r > 0. Taking x = C_1 (log n)^{1/κ_1} in Theorem 1.1 achieves this and thus completes the sketch of the proof.
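In display form, the doubling argument just given (with τ̃(x) := τ(x) − cσ(x) as above) is:

```latex
\tilde\tau(2x) \ge 2\,\tilde\tau(x)
\;\Longrightarrow\;
\frac{\tilde\tau(2^n x)}{2^n x} \ge \frac{\tilde\tau(x)}{x}
\;\Longrightarrow\;
\frac{\tilde\tau(x)}{x} \le \lim_{n\to\infty} \frac{\tilde\tau(2^n x)}{2^n x} = \nu ,
```

so that τ(x) ≤ νx + cσ(x) for all large x. Applied (roughly) with τ(n) = E T_n, σ(n) = √n (log n)^{1/κ_1} and ν = µ, this yields the upper bound in (1.3).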
Our main goal is to improve the log n term in Theorem 1.2. This has been done recently in a lattice FPP model and a directed polymer model in [2, 3] by an entropy reduction technique, showing that one can replace the log n term by log log n. The key idea is to exploit the dependence among passage times between nearby points in order to reduce the number of times a concentration result like Theorem 1.1 is applied.
The improvement from log n to log log n is important, especially when a sub-gaussian concentration bound for T_n is available. For the lattice FPP model, [7] proved sub-gaussian concentration on the scale √(n/log n) (extending work in [6]). Using this, [2] proved that for a directed FPP model, nonrandom fluctuations can be bounded by the order √(n/log n) · log log n = o(√n).

These bounds have not yet been extended to Euclidean FPP. The strongest concentration inequality to date is Theorem 1.1 of Howard and Newman.
A consequence of our main results is that one can replace the √n (log n)^{1/κ_1} term in Theorem 1.2 by √n (φ(n))^{1/κ_1}, where φ(n) can be an arbitrary iterate of log n. Our proof works in a general framework which does not depend on any particular scale of concentration. So if a sub-gaussian concentration result for Euclidean FPP is proved, then our results would immediately imply an o(√n) bound in Theorem 1.2.
Notation: we use boldface letters (e.g. x, y, q) to denote elements of ℝ^d or ℝ^{d−1}. Denote by ‖·‖ the corresponding ℓ^2-norm and by ‖·‖_∞ the ℓ^∞-norm. We use C_0 > 0 to denote a small constant and C_1 > 0 a large constant, with values that may vary from case to case. We use notation like D_{2.3} to denote constants whose values may depend on k and/or r, but not on n. The subscript refers to the result number; for example, D_{2.3} denotes the constant in Theorem 2.3.

MAIN RESULTS
In this section, we state the main theorems. We state our results in a general way which does not depend on any one particular concentration result. Let ψ : (0, ∞) → (0, ∞) be a real function. We assume the following concentration on the scale ψ(n).
Define log^{(0)} n = n and log^{(k)} n = log(log^{(k−1)} n) for k = 1, 2, 3, ..., whenever this is well-defined. Write x = (x_1, x_2) ∈ ℝ^d, where x_1 ∈ ℝ and x_2 ∈ ℝ^{d−1}. Define for n ≥ 1 and k ≥ 0,

Theorem 2.3. Write B_1 := B^{(k−1)}(n) and B_2 := ne_1 + B^{(k−1)}(n). For any k ≥ 2 and r > 0, there exists a constant D_{2.3} = D_{2.3}(k, r) > 0 such that for large n

Note that the scale of concentration in Theorem 2.3 is smaller than that of the next theorem (and is independent of k). This is the main reason why we can use estimates for any value of k to give improved ones for k + 1.
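Since each induction step gains one more iterate of log, it may help to see numerically how slowly log^{(k)} grows. A short sketch (our own helper, not from the paper):

```python
import math

def iterated_log(n, k):
    """Return log^{(k)} n: log applied k times to n, with log^{(0)} n = n."""
    x = float(n)
    for _ in range(k):
        x = math.log(x)
    return x

# For n = 10^9: log^{(1)} n ~ 20.7, log^{(2)} n ~ 3.0, log^{(3)} n ~ 1.1,
# so the correction factor psi(n) * log^{(k)} n shrinks rapidly in k.
```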
One key ingredient in the proof of the above result is a simple bound on |E T(x, y) − E T(x′, y′)| that reflects the fact that E T(x, y) is simply a function of ‖x − y‖. This is not true for general lattice models. Indeed, it is a standard technique (see [10, 11], among many others) to decompose a difference like the one from the last theorem into random error terms, nonrandom error terms, and a difference of the form µ_{y−x} − µ_{y′−x′}. (Here we write µ_u for the limit lim_n T(0, nu)/n, which in our model is simply ‖u‖µ.) The idea then is to use information about the limiting shape of the model (for instance curvature) to control µ_{y−x} − µ_{y′−x′} directly, but then one must bound both the random and the nonrandom error terms. The bounds available for nonrandom errors are generally worse (by some logarithmic factor) than those available for random errors, so one cannot obtain better concentration for T(x, y) − T(x′, y′) than the bounds on nonrandom errors. In our case, we can decompose directly and exploit the rotational invariance of E T (from the underlying Poisson process) to obtain bounds without needing control of the nonrandom error.
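Written out, the standard decomposition just described reads (first line: random errors; second line: nonrandom errors; third line: the limit-shape difference):

```latex
\begin{aligned}
T(x,y) - T(x',y') ={}& \bigl[T(x,y) - \mathbb{E}T(x,y)\bigr] - \bigl[T(x',y') - \mathbb{E}T(x',y')\bigr] \\
&+ \bigl[\mathbb{E}T(x,y) - \mu_{y-x}\bigr] - \bigl[\mathbb{E}T(x',y') - \mu_{y'-x'}\bigr] \\
&+ \bigl(\mu_{y-x} - \mu_{y'-x'}\bigr).
\end{aligned}
```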
Theorem 2.5. Let µ be the time constant. For any k ≥ 1, there exists a constant D_{2.5} > 0 such that for large n,

µn ≤ E T_n ≤ µn + D_{2.5} ψ(n) log^{(k)} n.

Define for any λ ∈ ℝ and n ≥ 1

Theorem 2.6. Write B_1 := B(n) and B_2 := ne_1 + B(n). For any k ≥ 1 and r > 0, there exists a constant D_{2.6} = D_{2.6}(k, r) > 0 such that for all large n and λ ∈ n/(log

We will prove Theorems 2.3 to 2.6 by mathematical induction on k. Note that Theorem 2.3 is stated for k ≥ 2, while the other three theorems are stated for k ≥ 1. The framework of the mathematical induction can be summarized in the following three steps:

Denote these four statements by I*, II*, III* and IV* respectively. Then they are proved in the following sequence:

Organization of the paper:
In Section 3, we prove some basic results about the Euclidean FPP model. In Section 4, we verify the initial step of the mathematical induction. In Section 5, we complete the induction step, thereby completing the proofs of Theorems 2.3, 2.4, 2.5 and 2.6.

PRELIMINARY RESULTS
In this section, we prove some basic properties of the Euclidean FPP model under Assumptions 2.1 and 2.2. The proofs of these results are analogous to those of the corresponding results in [9]. As a result of [9, Lemma 5.2], we have the following lemma. Define for x ∈ ℝ^d and n ≥ 1

Lemma 3.1. Define the events F_n, for n = 1, 2, ..., as follows:

(i) There exist constants C_0, C_1 > 0 such that

(ii) Furthermore, there exists a constant D_{3.1} > 0 such that, restricted to F_n, we have

sup{‖q − q′‖ : (q, q′) is a geodesic between q, q′ ∈ Q ∩ B(0, 4n)}

Proof: (The proof follows exactly that of [9, Lemma 5.2], whose statement is similar but with ψ^{1/α} replaced by n^γ for some γ ∈ (0, 1).) It is sufficient to prove (3.1). Note that B(0, 4n) can be covered with O(n^d / ψ(n)^{d/α}) balls of radius ½ψ(n)^{1/α}. If F_n^c occurs, then the intersection of Q with one of these balls must be empty. Therefore

where the last line uses the fact that ψ(n) > n^{κ_3/2} for large n. This completes the proof. ■

For any x, y ∈ ℝ^d define H(x, y) := E T(x, y). By the symmetry of the Poisson point process, there is a function h : ℝ_+ → ℝ_+ such that H(x, y) = h(‖x − y‖), where ‖x − y‖ is the Euclidean norm. As a result of subadditivity, we have the following simple lemma.
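For a rate-one Poisson process, the probability that a fixed ball of radius ½ψ(n)^{1/α} contains no point is exp(−ω_d 2^{−d} ψ(n)^{d/α}), where ω_d denotes the volume of the unit ball in ℝ^d. The covering argument sketched above therefore gives (a reconstruction in spirit, not necessarily the paper's exact display):

```latex
\mathbb{P}(F_n^c)
\;\le\; C\,\frac{n^{d}}{\psi(n)^{d/\alpha}}\,
\exp\!\bigl(-\,\omega_d\, 2^{-d}\, \psi(n)^{d/\alpha}\bigr),
```

which is stretched-exponentially small once ψ(n) > n^{κ_3/2}.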
Proof: By subadditivity,

Reversing the roles of x and y gives the same bound for |h(y) − h(x)|. Last, we note that an immediate consequence of [8, Lemma 1] is that h(x) ≤ D_{3.2} x for all x ≥ 0. ■

We also need the following simple lemma to control the difference of passage times when the endpoints do not differ by too much.

Lemma 3.3. There exists a constant D_{3.3} > 0 such that, restricted to F_n, for x, y, y′ ∈ B(0, 4n) such that ‖y − y′‖

Proof: When restricted to F_n, we have ‖q(y) − y‖ ≤ ψ(n)^{1/α}. The proof then follows from the following bound from [9, (2.14)]:

■
The last result in this section is a global concentration result which plays an important role in verifying the initial cases of the mathematical induction.
Lemma 3.4. Define the set C ⊂ ℝ^d × ℝ^d as follows:

For any r > 0, there exists a constant D_{3.4} = D_{3.4}(r) > 0 such that for all large n,

where the events G_n, n = 1, 2, ..., are defined as follows:

Proof: In the rest of the proof we will write D for D_{3.4} in the definition of G_n. Combining the above two bounds, when n is large, F_n ∩ G_n^c implies that there exists (x′, y′) ∈ C such that

when D is large. By Assumption 2.1, for any fixed pair (x′, y′) ∈ C,

Therefore

Combining this bound with Lemma 3.1 and taking D large completes the proof. ■

THE INITIAL STEP
The goal of this section is to verify the initial step of the mathematical induction. Precisely, we will prove the following three lemmas. Lemmas 4.1, 4.2 and 4.3 imply the k = 1 cases of Theorems 2.4, 2.5 and 2.6 respectively. Note that Lemmas 4.1 and 4.3 are actually stronger than the corresponding initial versions of the theorems.
Lemma 4.1. For any r > 0, there exists a constant D_{4.1} = D_{4.1}(r) > 0 such that for large n

In fact, one can take D_{4.1}(r) = D_{3.4}(r).
Proof: When n is large,

When D_{4.1}(r) = D_{3.4}(r), the event considered in this lemma implies G_n^c. Therefore Lemma 4.1 follows from Lemma 3.4 immediately. ■

Remark 1. Without loss of generality we can assume κ_1 is so small that η > 1/2 (recall η from (2.1)). Then

Lemma 4.2.
There exists a constant D 4.2 > 0 such that for large n.
Proof: By Lemma 1.4, it is sufficient to show that there exists a constant D > 0 such that for all large n,

h(2n)

The proof closely follows that of [9, Lemma 4.1]. Note that, restricted to F_n, there exists a point q such that q is on the geodesic M(0, 2ne_1). Therefore T(0, q) + T(q, 2ne_1) = T(0, 2ne_1). Applying this to an outcome in F_n ∩ G_n (which has positive probability), for such a q we have

h(2n) = H(0, 2ne_1)

H(0, q) + H(q, 2ne_1)

Then by Lemma 3.2, we have

min{H(0, q), H(q, 2ne_1)}

Combining the above two inequalities, we have

This implies (4.1) for large n. ■

Lemma 4.3.

d_max(M(x, y), (0, ne_1))

Proof: Restricted to F_n, the event considered in Lemma 4.3 implies that there exist x ∈ B_1, y ∈ B_2 and q ∈ Q ∩ B(0, 4n) such that

inf_{z ∈ (0, ne_1)} ‖q − z‖

and q is on the geodesic from x to y, i.e., T(x, q) + T(q, y) = T(x, y).
Meanwhile, elementary geometry shows that there exists a constant C_0 > 0 such that for large n, x ∈ B_1, y ∈ B_2 and q ∈ Q ∩ B(0, 4n) as above,

Therefore, by Lemma 4.2 and the fact that ‖x − y‖ ≤ 2n,

Comparing (4.2) and the above bound, we have

Taking D_{4.3} so large that

> D_{3.4}, the above argument implies that when n is large

The proof is completed by applying Lemmas 3.1 and 3.4. ■

Remark 2. Theorem 2.6 restricts the geodesic M(0, ne_1) to L(λ), while Lemma 4.3 removes this restriction in the case k = 1. Therefore Lemma 4.3 implies Theorem 2.6 with k = 1.

THE INDUCTION STEP
In this section, we complete the induction step. We assume that Theorems 2.4, 2.5 and 2.6 hold for k = k_0 ≥ 1, and denote these three assumptions by II, III and IV respectively.
The goal is to prove the k = k_0 + 1 cases of Theorems 2.3, 2.4, 2.5 and 2.6. Denote these four statements by I*, II*, III* and IV* respectively. These four statements are proved in the following sequence:

For ease of reference, we state all assumptions precisely. For simplicity, define φ(n) := log^{(k_0 − 1)} n. Recall the constants γ, β, η from (2.1). Define

and

Assumption 5.2 (III).
Let µ be the time constant. There exists a constant D_{5.2} > 0 such that for large n,

nµ ≤ E T(0, ne_1) ≤ nµ + D_{5.2} ψ(n) log φ(n).

Recall the definition of L(λ) = L(λ, n) given before Theorem 2.6.

Assumption 5.3 (IV). Define B_1 = B and B_2 = B_1 + ne_1. For any r > 0, there exists D_{5.3} = D_{5.3}(r) > 0 such that for large n and λ ∈ [w(n), n − w(n)] we have

We now state the four statements that we need to prove in order to complete the mathematical induction.

Lemma 5.5 (II*). For any r > 0, there exists a constant D_{5.5} = D_{5.5}(r) such that for large n

Lemma 5.6 (III*). There exists a constant D_{5.6} > 0 such that for large n,

nµ ≤ E T(0, ne_1) ≤ nµ + D_{5.6} ψ(n) log log φ(n).

One main technique used in the proofs is to apply Assumption 5.1 multiple times, using many transformed copies of B to cover a larger region. More precisely, for any x ∈ ℝ^d, let T_x : ℝ^d → ℝ^d be the linear transformation such that T_x rotates e_1 to x/‖x‖ in the plane spanned by e_1 and x, and fixes all y such that y ⊥ x and y ⊥ e_1. For x, y ∈ ℝ^d, define

B(x, y) := T_{x−y} B(‖x − y‖).    (5.2)

Note that B(x, y) is obtained by rotating B(‖x − y‖) by the angle θ(x − y), which maps e_1 to the direction of x − y. By the symmetry of B(‖x − y‖), we have T_{x−y} B(‖x − y‖) = T_{y−x} B(‖x − y‖). Similarly, define B*(x, y) := T_{x−y} B*(‖x − y‖). In the proof of Lemma 5.4, Assumption 5.1 is applied to many pairs of boxes of the form x + B(x, y) and y + B(x, y). In the proofs of Lemmas 5.6 and 5.7, we also apply Lemma 5.5 to pairs of the form x + B*(x, y) and y + B*(x, y). In the rest of this section, we prove some results that control the effect of rotation on such boxes.
Note that the second "⊂" in Lemma 5.8 is optimal, in the sense that the constant |sin θ(z)|·b + |cos θ(z)|·a cannot be improved. The proof of this fact is elementary and is therefore omitted. Lemma 5.8 immediately implies the following results for B and B*.

Lemma 5.9. Suppose n ≥ c ≥ 1 and K > 0. For any z ∈ ℝ^d such that

Proof: First we show that for n ≥ c ≥ 1,    (5.4)

By Assumption 2.2 and monotonicity of φ(·), when n is large,

This proves (5.4). Next, by (5.3) and the fact that log φ(n) ≤ φ(n), we have

By Lemma 5.8, this implies that (1/(K+1)) B(n) ⊂ T_z B(n), which combined with (5.4) completes the proof of the statement about B. The statement about B* can be proved similarly. ■

Given C ⊂ ℝ^d × ℝ^d, for n^{1/2} ≥ c ≥ 1 and K > 0, we say C is (c, K)-regular of order n (or simply (c, K)-regular) if for every pair (x, y) ∈ C we have

Note that in the above definition c and K may also depend on n. As a corollary of Lemma 5.9:

Corollary 5.10. If C is (c, K)-regular of order n for n^{1/2} ≥ c ≥ 1 and K > 0, then we have, for every (x, y) ∈ C,

Organization of the rest of this section: We will prove Lemmas 5.4, 5.5, 5.6 and 5.7 in Sections 5.1, 5.2, 5.3 and 5.4 respectively. This will complete the proof of Theorems 2.3, 2.4, 2.5 and 2.6.

See Figure 5.1.

for any fixed r > 0 and large n, where the last line uses Lemma 3.1, (5.5) and Lemma 5.11. By Assumption 2.2, ψ(n) = Ω(n^{κ_3/2}), so the first two terms in the above display are dominated by the third term. Therefore, for any fixed r and large n,

Since r can be arbitrarily large, the proof of Lemma 5.4 is completed. ■
FIGURE 5.1. Illustration of the proof of Lemma 5.4. The path that follows points from x to q_1, then to q_2, and last to y is a geodesic. One can construct a possibly suboptimal path from x′ to y′ by taking a geodesic from x′ to q_1, following the first geodesic from q_1 to q_2, and then taking a geodesic from q_2 to y′. Using a similar argument with x, y switched with x′, y′ produces the main inequality (5.7).
Proof: First, we bound |E T(x, y) − E T(x′, y)|. Elementary geometry implies that for large n

where the third line uses the fact that ψ(n) = o(n^{1−κ_3}), so that n(log φ(n))^{−γ} − 2ψ(n) > n(log φ(n))^{−γ}/2 for large n, and the fourth line uses the definition η = γ + β. Combining the above bound with Lemma 3.2, we have

sup

Second, we prove the following concentration result: for any r > 0 and large n,

To prove this, recall the definitions of θ(x − y) and B(x, y) from (5.1) and (5.2). By Assumption 5.1, for every (x, y) ∈ B*_1 × L*_1 and large n,

-regular. By Corollary 5.10, we have

On the other hand, since

where the last line uses the relation

Combining (5.12), (5.13) and (5.14) in (5.10), we have for all (x, y)

There exists a constant C_1 > 0 such that B*_1 can be covered by copies of B, and L*_1 can be covered by copies of B. Therefore by a union bound we have

when n is large. This proves (5.9). Combining (5.8) and (5.9) completes the proof. ■

The first term above can be bounded directly by Lemma 5.4. The second term can be bounded by the concentration bound in Assumption 2.1, which implies, for K = ((r + 1)/C_0)^{1/κ_1} and large n,    (5.16)

To bound the last term in (5.15), note that for x ∈ B*_1, y ∈ B*_2 and large n,

Then by Lemma 3.2, we have    (5.17)

Combining Lemma 5.4, (5.16), (5.17) and (5.15), when n is large,

The proof of Lemma 5.5 is completed. ■

5.3. IV + II* ⇒ III*. In this section we prove Lemma 5.6.

Proof of Lemma 5.6: Write T_n = T(0, ne_1) for n ≥ 1. By Lemma 1.4, it suffices to show that there exists a constant D > 0 such that for all large n,    (5.18)

Define for n ≥ 1,

For some constant K > 0 to be decided later, consider the event

where:

(For the definition of E_1, recall that M(0, 2ne_1) ⊂ Q.) Restricted to E_1, there exists q ∈ L* such that T(0, 2ne_1) = T(0, q) + T(q, 2ne_1).

Finally, since P(E_2^c) = P(E_3^c), we only need to bound P(E_2^c). Recall B*(0, x) from (5.2). By Lemma 5.5, for all x ∈ L*,

P( sup_{x′ ∈ x + B*(0, x)}

Since n/2 ≤ ‖x‖ ≤ 2n for all x ∈ L*, the above bound implies that for large n,

P( sup_{x′ ∈ x + B*(0, x)}    (5.19)

Now we show that the set {0} × L* is (2, 8D_{5.5})-regular. Indeed, when n is large, ‖x‖ ≤ 2n and    (5.20)

Then for all x ∈ L*, we have ‖x‖ ≥ n/2 and tan θ(x)

Thus the set {0} × L* is (2, 8D_{5.5})-regular. Therefore by Corollary 5.10 we have, for all x ∈ L* and when D_{5.5} > 1,

Then from (5.19), we have    (5.21)

By (5.20), L* can be covered by at most

copies of B. Then by the union bound and (5.21), we have

Taking K = 2D_{5.5} and letting r > (β + η)(d − 1), we have P(E_2^c) → 0 as n → ∞. Therefore we have proved that P(E^c) is small when n is large. The proof of Lemma 5.6 is completed. ■

5.4. IV + II* + III* ⇒ IV*. In this section we prove Lemma 5.7.
Proof of Lemma 5.7: Let K be a constant whose value will be determined later. Define, for any

Define the events E_+(λ) and E_−(λ) for λ ∈ [w*(n), n − w*(n)] as follows:

Then we have    (5.22)

By Assumption 5.3, for large n,

(5.23)
In the rest of the proof, we will prove an upper bound for P(E_−(λ)). See Figure 5.2 for the configuration in the event E_−(λ).

and then give a bound on P(E_1(λ)) by Lemma 5.5. Now let us prove (5.24) first. Note that for any x ∈ B_1, y ∈ B_2 and q ∈ L_−, elementary geometry shows that when n is large,

‖x − q‖ + ‖q − y‖ − ‖x − y‖

Combining this and (5.25), we have

h(‖x − q‖) + h(‖q − y‖) − h(‖x − y‖)

µ‖x − q‖ + µ‖q − y‖ − µ‖x − y‖ + 4D_{5.6} ψ(n)(log log φ(n))

Since E_−(λ) implies that there exist x ∈ B_1, y ∈ B_2 and q ∈ L_−(λ) such that T(x, y) = T(x, q) + T(q, y).
Combining the above two displays proves (5.24). Next we prove an upper bound for P(E_1(λ)).

Since r > 0 is arbitrary, the proof of Lemma 5.7 is then completed. ■




FIGURE 5.2. Illustration of the event E_−(λ) in the proof of Lemma 5.7. The condition is that there are points x ∈ B_1 and y ∈ B_2 such that the geodesic M(x, y) contains a Poisson point in L_−(λ). The overall strategy of the proof is to show that geodesics between such points are unlikely to enter L_+(λ) (from the event E_+(λ)) and also unlikely to enter L_−(λ) (from the event E_−(λ), illustrated here).