The phase transitions of the random-cluster and Potts models on slabs with $q \geq 1$ are sharp

We prove sharpness of the phase transition for the random-cluster model with $q \geq 1$ on graphs of the form $\mathcal{S} := \mathcal{G} \times S$, where $\mathcal{G}$ is a planar lattice with mild symmetry assumptions and $S$ is a finite graph. That is, for any such graph and any $q \geq 1$, there exists a parameter $p_c = p_c(\mathcal{S}, q)$ below which the model exhibits exponential decay and above which there a.s. exists an infinite cluster. The result is also valid for the random-cluster model on planar graphs with long-range, compactly supported interactions. It extends to the Potts model via the Edwards-Sokal coupling.


Introduction
In the last few years, a variety of results concerning the phase transition of the random-cluster model (or FK-percolation) on planar lattices have emerged; see [3,8,5,6]. The first two are specific to the self-dual setting of the square lattice, while the third, still in preparation, extends their results to isoradial graphs. Finally, the fourth paper, a companion to the present one, proves the sharpness of the phase transition of random-cluster models on generic planar graphs with sufficient symmetry.
These recent advances offer an understanding of planar random-cluster models that approaches that of Bernoulli percolation. However, contrary to the case of Bernoulli percolation, for which exponential decay in the subcritical phase was proved for lattices of any dimension (see [1,15], and [10] for a recent short proof), the phase transition of the random-cluster model in dimensions above two is still not known to be sharp. We take a first step in this direction by proving the result for slabs, that is, finite-width "slices" of a d-dimensional lattice.
Percolation on slabs has already been considered in the literature, most notably in the paper [13], where it was shown that the critical point of percolation on Z^2 × {0, 1, 2, . . . , N}^{d−2} tends decreasingly to that of Z^d as N → ∞. There is work in progress on the same type of result for the random-cluster model with integer q [9]. Let us also mention that Bernoulli percolation on slabs has recently been shown to exhibit a continuous phase transition [7], a result that is also long sought for lattices of general dimension. Arguments similar to those of [9] also appear in [16] and [2].
The present paper blends the method of [6] with the techniques of [7]. It is intended as a complement to [6], focusing essentially on the new elements needed to treat the case of slabs.
Next we briefly introduce the model. For more details on the random-cluster model, we refer the reader to the monograph [12].
Consider a finite graph G = (V_G, E_G). The random-cluster measure with edge-weight p ∈ [0, 1] and cluster-weight q > 0 on G is a measure φ_{p,q,G} on configurations ω ∈ {0, 1}^{E_G}. For such a configuration ω, an edge e is said to be open (in ω) if ω(e) = 1; otherwise it is closed. The configuration ω can be seen as a subgraph of G with vertex set V_G and edge set {e ∈ E_G : ω(e) = 1}. A cluster is a connected component of the subgraph ω. Let o(ω), c(ω) and k(ω) denote the number of open edges, closed edges and clusters in ω, respectively. The probability of a configuration is then equal to
φ_{p,q,G}(ω) = p^{o(ω)} (1 − p)^{c(ω)} q^{k(ω)} / Z(p, q, G),
where Z(p, q, G) is a normalising constant called the partition function.

Fix for the rest of the paper a connected, planar, locally finite graph G = (V_G, E_G) which is invariant under the action of some lattice Λ ≃ Z ⊕ Z, under reflection with respect to the line {(0, y) : y ∈ R} and under rotation by some angle θ ∈ (0, π) around 0. For simplicity we will assume in the present paper that θ = π/2 and that G is invariant under translations by the vectors (1, 0) and (0, 1).
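As a sanity check on this definition (an addition of ours, not part of the original text), note that for cluster-weight q = 1 the model reduces to Bernoulli percolation:

```latex
% For q = 1 the cluster term q^{k(\omega)} equals 1, so the weight factorises over edges:
\phi_{p,1,G}(\omega) \;=\; \frac{p^{o(\omega)}\,(1-p)^{c(\omega)}}{Z(p,1,G)},
\qquad
Z(p,1,G) \;=\; \sum_{\omega\in\{0,1\}^{E_G}} p^{o(\omega)}(1-p)^{c(\omega)}
\;=\; \prod_{e\in E_G}\big(p+(1-p)\big) \;=\; 1 .
```

In particular, each edge is then open independently with probability p, and the statements below contain Bernoulli percolation as the special case q = 1.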
In addition, let S = (V_S, E_S) be a finite connected graph and define the "slab" S = G × S. That is, S is the graph with vertex set V_S = V_G × V_S and edge set E_S connecting two vertices (u, v) ∈ V_S and (u′, v′) ∈ V_S if either u = u′ and (v, v′) ∈ E_S, or (u, u′) ∈ E_G and v = v′. Perhaps the most common such example is G = Z^2 and S = {1, . . . , n}, in which case S is a slice of thickness n of the three-dimensional lattice Z^3.
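To make the product structure concrete, the following side computation (ours, with the path S = {1, . . . , n} taken as an illustrative assumption) records the vertex degrees in the slab:

```latex
% In the product graph S = G x S, a neighbour of (u,v) is obtained by moving in
% exactly one of the two coordinates, hence
\deg_{\mathcal S}\big((u,v)\big) \;=\; \deg_{\mathcal G}(u) + \deg_{S}(v).
% For G = Z^2 and S = \{1,\dots,n\} a path: interior vertices have degree 4+2 = 6,
% while vertices in the two boundary layers (v \in \{1,n\}) have degree 4+1 = 5.
```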
For p ∈ [0, 1] and q ≥ 1, random-cluster measures with parameters p, q may be defined on the infinite graph S by taking weak limits of measures on sequences of nested finite graphs G_n tending to S (see [12, Ch. 4] or [4, Sec. 4.5] for a detailed account). We call such limits infinite-volume measures. For a pair of parameters p, q, more than one such infinite-volume measure may exist; the two most notable ones are the free and wired measures, denoted by φ^0_{p,q,S} and φ^1_{p,q,S}, respectively. These are ordered in that, for p < p′ and q ≥ 1,
φ^0_{p,q,S} ≤_st φ^1_{p,q,S},   φ^0_{p,q,S} ≤_st φ^0_{p′,q,S}   and   φ^1_{p,q,S} ≤_st φ^1_{p′,q,S},
where ≤_st denotes stochastic domination. Moreover, φ^0_{p,q,S} and φ^1_{p,q,S} are the extremal measures with parameters p and q, in the sense that, if φ_{p,q,S} is an infinite-volume measure with these parameters, then φ^0_{p,q,S} ≤_st φ_{p,q,S} ≤_st φ^1_{p,q,S}. While it is possible to have values of p for which the infinite-volume measure is not unique, i.e. for which φ^0_{p,q,S} ≠ φ^1_{p,q,S}, at most countably many such values of p exist for any fixed q ≥ 1. For p, q such that φ^0_{p,q,S} = φ^1_{p,q,S}, we will denote the unique infinite-volume measure by φ_{p,q,S}.

Theorem 1.1. Fix q ≥ 1. There exists p_c = p_c(S) ∈ [0, 1] such that
• for p < p_c, there exists c = c(p, S) > 0 such that for any x, y ∈ S, φ^1_{p,q,S}[x ↔ y] ≤ exp(−c d(x, y)), where d denotes the graph distance on S;
• for p > p_c, there exists a.s. an infinite open cluster under φ^0_{p,q,S}.

The equivalent of Theorem 1.1 is also valid for planar random-cluster models with finite-range interactions; we define these next. Let J : V_G × V_G → [0, +∞) be a function with the property that there exists a constant M ≥ 1 such that J(x, y) = 0 whenever d_G(x, y) > M (where d_G is the graph distance on G). Moreover, suppose that J has the same symmetries as G.
Infinite-volume random-cluster measures φ_{β,q,G,J} with parameters β > 0 and q ≥ 1 may be defined as before, as weak limits of measures φ_{β,q,G_n,J} on sequences of finite subgraphs G_n tending to G, where
φ_{β,q,G_n,J}(ω) = [∏_{e={x,y} : x,y ∈ V_{G_n}} (e^{βJ(x,y)} − 1)^{ω(e)}] q^{k(ω)} / Z(β, q, G_n, J),
Z(β, q, G_n, J) being a normalising constant. The same remarks about the different infinite-volume measures as in the case of slabs apply here.

Results for the Potts model. The above results have direct consequences for the Potts model. Consider an integer q ≥ 2 and introduce the polyhedron Ω_q ⊂ R^{q−1} with q elements, defined by the property that, for any a, b ∈ Ω_q,
a ⋅ b = 1 if a = b   and   a ⋅ b = −1/(q − 1) if a ≠ b,
where ⋅ denotes the scalar product on R^{q−1}. Let G = (V_G, E_G) be a finite graph and β > 0. The q-state Potts model on G at inverse-temperature β with free boundary conditions is defined as follows. The energy of a configuration σ ∈ Ω_q^{V_G} is given by the Hamiltonian
H_G(σ) = −∑_{(x,y)∈E_G} σ_x ⋅ σ_y.
The probability μ_{β,q,G} of a configuration σ is defined by
μ_{β,q,G}(σ) = exp(−β H_G(σ)) / Z(G, β, q),   (1.3)
where Z(G, β, q) is defined in such a way that the sum of the weights over all possible configurations equals 1.
As for the random-cluster model, the q-state Potts measure with free boundary conditions µ β,q,S on the infinite graph S may be defined by taking the weak limit of measures µ β,q,Gn on sequences of nested finite graphs G n converging to S .
The Edwards-Sokal coupling between the measures φ^0_{p,q,S} and μ_{β,q,S}, where p = 1 − exp(−qβ/(q − 1)), yields the following relation for any two vertices x, y ∈ S:
φ^0_{p,q,S}[x and y are connected by a path of open edges] = μ_{β,q,S}(σ_x ⋅ σ_y).
The above equation together with Theorem 1.1 implies the following corollary.
Corollary 1.3. Fix q ≥ 2 and set β_c = −((q − 1)/q) log(1 − p_c). Then
• for β < β_c, there exists c = c(β, S) > 0 such that for any x, y ∈ S, μ_{β,q,S}(σ_x ⋅ σ_y) ≤ exp(−c d(x, y));
• for β > β_c, there exists c′ = c′(β, S) > 0 such that for any x, y ∈ S, μ_{β,q,S}(σ_x ⋅ σ_y) ≥ c′.
Likewise, Theorem 1.2 may be translated to the Potts model. If J is a function as before, define the Hamiltonian of the weighted Potts model on a finite subgraph G of G by
H_{G,J}(σ) = −∑_{x,y∈V_G} J(x, y) σ_x ⋅ σ_y,
and the associated measure μ_{β,q,G,J} by (1.3). Infinite-volume measures μ_{β,q,G,J} may also be defined as above.
• for β > β_c, there exists c′ = c′(β, G, J) > 0 such that for any x, y ∈ G, μ_{β,q,G,J}(σ_x ⋅ σ_y) ≥ c′.
We will not discuss this adaptation to the Potts model further. For background on the Potts model and its coupling to the random-cluster model, we direct the reader to [12]. Deriving the two corollaries from Theorems 1.1 and 1.2 through the Edwards-Sokal coupling is straightforward.
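To substantiate that last claim, here is our own sketch of the computation behind the Edwards-Sokal identity stated above (we use the standard form of the coupling: sample ω from φ^0_{p,q,S}, then colour each cluster with an independent uniform spin in Ω_q):

```latex
% Conditionally on \omega, the spins \sigma_x, \sigma_y agree surely when x and y lie
% in the same cluster, and are independent and uniform on \Omega_q otherwise:
\mathbb{E}\big[\sigma_x\cdot\sigma_y \,\big|\, \omega\big]
  \;=\; \mathbf{1}_{\{x\leftrightarrow y\}}
  \;+\; \mathbf{1}_{\{x\not\leftrightarrow y\}}\;
        \mathbb{E}[\sigma_x]\cdot\mathbb{E}[\sigma_y].
% The q vectors of \Omega_q sum to 0, since
% |\sum_{a\in\Omega_q} a|^2 = q\cdot 1 + q(q-1)\cdot\tfrac{-1}{q-1} = 0,
% so \mathbb{E}[\sigma_x] = 0 and, averaging over \omega,
\mu_{\beta,q,\mathcal S}(\sigma_x\cdot\sigma_y)
  \;=\; \phi^0_{p,q,\mathcal S}\big[x\leftrightarrow y\big].
```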

Notation and preparatory remarks
Notation. In the rest of the paper q ≥ 1 will be fixed and we drop it from the notation. We will only work with infinite volume measures on S , hence we will equally drop S from the notation for φ.
Thus φ p will denote any infinite volume measure on S with edge-weight p and cluster-weight q.
It will be apparent in the proofs that we always allow ourselves to alter p within a small open interval. We may therefore assume that all values of p mentioned hereafter are such that φ_p is the unique infinite-volume measure. If A is a subgraph of G, then we define Ā = A × S and regard it as a subgraph of S. Let u, v ∈ S be two vertices, let D ⊂ S be a subgraph and let ω ∈ {0, 1}^{E(S)} be a configuration.
We write u ↔ v (in D, for the configuration ω) for the event that u and v are connected by an ω-open path contained in D. When no confusion is possible, the configuration ω will be omitted from the notation. If D is omitted, it is taken equal to S.
For a ≤ b and c ≤ d, consider the rectangle R = [a, b] × [c, d], let A = {a} × [c, d] and B = {b} × [c, d] (respectively A = [a, b] × {c} and B = [a, b] × {d}), and let C_h([a, b] × [c, d]) (respectively C_v([a, b] × [c, d])) be the event that A is connected to B by an open path in R; if it occurs we say that R is crossed horizontally (respectively vertically). An open path from A to B is called a horizontal crossing (respectively a vertical crossing). When a = 0 and c = 0, we simply write C_h(b, d) and C_v(b, d) for the events above. When b − a > d − c, horizontal crossings are called crossings in the hard direction, while vertical ones are crossings in the easy direction. The terms are exchanged when b − a < d − c.
For g ∈ G, let B_R(g) (respectively ∂B_R(g)) be the set of vertices of G at distance less than or equal to R (respectively equal to R) from g. For a point z = (g, n) ∈ S, define Λ_R(z) = B_R(g) × S and ∂Λ_R(z) = ∂B_R(g) × S. We call Λ_R(z) the box of size R around z.

Strategy of the proof
Define
p̃_c = sup{p ∈ [0, 1] : there exists c = c(p) > 0 such that φ_p[x ↔ y] ≤ exp(−c d(x, y)) for all x, y ∈ S},
p_c = inf{p ∈ [0, 1] : φ^0_p(there exists an infinite open cluster) = 1}.
For p < p̃_c we say that φ_p exhibits exponential decay, since connection probabilities decay exponentially in the distance; for p > p_c, φ_p is supercritical, in that it contains a.s. an infinite cluster. It is immediate that p̃_c ≤ p_c. We wish to prove that p_c = p̃_c (this is simply another way of stating the main result), and we therefore focus on the inequality p̃_c ≥ p_c. As mentioned before, we adapt the argument of [6], which consists of three steps:
• First it is proved that, for p > p̃_c, the crossing probabilities under φ_p of 2n × n rectangles in the easy direction are bounded away from 0 uniformly in n.
• Building on this, in the second step, it is shown that the φ_p-crossing probabilities of 2n × n rectangles in the hard direction are also bounded away from 0 uniformly in n.
• Finally, in the third step, assuming that p̃_c < p_c, it is shown that for p ∈ (p̃_c, p_c), φ_p(C_h(2n, n)) → 1 as n → ∞. The first step then implies that the dual of φ_{p′} exhibits exponential decay for any p′ ∈ (p, p_c), and this contradicts the fact that p < p_c.
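For completeness, we record a one-line justification (our addition) of the inequality p̃_c ≤ p_c: exponential decay rules out an infinite cluster.

```latex
% Fix p < \tilde p_c and a vertex x. Summing the exponential bound over
% \partial\Lambda_R(x), whose cardinality grows at most polynomially in R
% (G is locally finite with a Z \oplus Z action, and S is finite),
\phi_p\big(x \leftrightarrow \partial\Lambda_R(x)\big)
  \;\le\; \sum_{y\in\partial\Lambda_R(x)} \phi_p\big[x\leftrightarrow y\big]
  \;\le\; \big|\partial\Lambda_R(x)\big|\, e^{-cR}
  \;\xrightarrow[R\to\infty]{}\; 0.
% Hence every cluster is a.s. finite at p, so p \le p_c; taking the supremum
% over p < \tilde p_c gives \tilde p_c \le p_c.
```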
While the first step is not specific to planar lattices, the next two steps make use of planarity, namely by "gluing" crossings and invoking duality. In Sections 4 and 5 of the present paper, we adapt the arguments used in the last two steps to the setting of slabs. An essential element is the "gluing" lemma discussed in Section 3.
Adapting the final step requires particular attention, since the dual of a random-cluster measure φ_p on S is not itself a random-cluster measure. To overcome this difficulty, we use certain bounds on the speed of convergence of φ_p(C_h(2n, n)) to 1 for p ∈ (p̃_c, p_c).
Differential inequalities. For an event A and a configuration ω, let H_A(ω) be the Hamming distance between ω and A, that is, the minimal number of edges whose state needs to be altered in order to obtain from ω a configuration ω′ ∈ A. Thus H_A is a random variable taking non-negative integer values. Moreover, if A is an increasing event, then H_A is a decreasing random variable.
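As an illustration (ours, not taken from the text), the Hamming distance to a connection event admits an explicit expression:

```latex
% For the increasing event A = \{x \leftrightarrow y\},
H_A(\omega) \;=\; \min_{\pi:\,x\to y}\ \#\{\,e\in\pi \,:\, \omega(e)=0\,\},
% the minimum running over all paths \pi from x to y: to realise A one must open
% every closed edge of some path, and opening the closed edges of a minimising
% path suffices. Opening an additional edge can only decrease this minimum,
% which illustrates why H_A is decreasing when A is increasing.
```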
The following lemma is the integrated form of the differential inequality of [14], as written in [6, Rem. 2.4]. This is the cornerstone of our approach.
Lemma 2.1. Let A be an increasing event depending only on the state of finitely many edges. Then, for 0 < p < p′ < 1,
The following lemma, taken from [12, Thm. 3.45], will also be useful.
Lemma 2.2. For any non-empty increasing event A and any non-negative integer k,

Gluing Lemma
One of the main challenges of percolation in dimensions higher than two is that the Jordan curve theorem does not apply. As a consequence, it is difficult to connect open paths to one another. Indeed, contrary to planar graphs, on non-planar graphs such as slabs, paths may overlap without intersecting. The gluing lemma is a tool to overcome this obstacle for slabs or for models with finite-range interactions. Here we will only present it in the context of slabs.
Figure 1: The typical use of Lemma 3.1, seen from "above". The grey area is D, which together with the additional white rectangle forms D′. The blue path ensures the occurrence of B; the red cluster is the one in the definition of A. The two overlap but do not necessarily connect. Left: a configuration in Y^{(1)} but not in Y^{(2)}; right: a configuration in both Y^{(1)} and Y^{(2)}. The overlap points are marked.
Suppose that the following deterministic topological condition is satisfied:
In addition, let D′ be a subset of G containing D and let A_0 be a subset of D′. Define A as the event that there exists an open cluster C ⊂ D′ intersecting A_0 and containing a path χ ⊂ D connecting A_1 and A_2. Let B be the event that B_1 is connected to B_2 by an open path contained in D. Finally, let X be the event that there exists an open cluster C′ ⊂ D′ that intersects A_0 and contains a path γ ⊂ D connecting B_1 to B_2. Then the two following statements hold.
(i) There exists a constant c > 0, depending only on p, G and S, such that (3.2) holds.
(ii) There exists a constant β > 0, depending only on p, G and S, such that (3.3) holds.
The first statement may be understood as follows: if two open paths necessarily overlap, then they have a positive probability of being connected to each other. The second statement is a quantitative version of the first, useful in Section 5. It essentially states that if A occurs with high probability, then the overlapping paths connect with high probability.
A version of this lemma initially appeared in [7], in the context of Bernoulli percolation. Its proof does not essentially use independence; it relies on the finite-energy property, a property shared by the random-cluster model. The property states that, for any configuration ω_0 and any edge e,
min{p, 1 − p}/(2q) ≤ φ_p(ω(e) = 1 | ω = ω_0 off e) ≤ 1 − min{p, 1 − p}/(2q).   (3.4)
The second part of the lemma, although similar in spirit, requires several additional technical tricks. We give a full proof of the two parts below. To aid legibility, we start with the simpler statement (i), then discuss the additional elements needed to obtain (ii).
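The finite-energy property can be derived directly from the definition of the measure; the computation below is our own sketch, and the precise constant min{p, 1 − p}/(2q) is an assumption chosen to match the constant Q used later in the proof.

```latex
% Let \omega^e and \omega_e denote \omega_0 with e set to open, resp. closed.
% Opening e multiplies the weight by p/(1-p) and merges at most two clusters:
\frac{\phi_p(\omega^e)}{\phi_p(\omega_e)}
  \;=\; \frac{p}{1-p}\; q^{\,k(\omega^e)-k(\omega_e)},
\qquad k(\omega^e)-k(\omega_e)\in\{-1,0\}.
% Hence the conditional probability that e is open satisfies
\phi_p\big(\omega(e)=1 \,\big|\, \omega=\omega_0 \text{ off } e\big)
  \;\in\; \Big[\frac{p}{p+q(1-p)},\; p\Big]
  \;\subset\; \Big[\frac{\min\{p,1-p\}}{2q},\; 1-\frac{\min\{p,1-p\}}{2q}\Big],
% the last inclusion using q \ge 1, so that p + q(1-p) \le 2q.
```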
there exists an open cluster in D that contains a crossing from A_1 to A_2, does not intersect B_i and is connected to A_0 in D′. See Figure 1 for examples. Note that Y^{(1)} and Y^{(2)} cover (A ∩ B) ∖ X. The rest of the proof is dedicated to bounding the probability of Y^{(1)}. Let ⪯ be an ordering of the oriented edges of D. This induces a lexicographical ordering of the paths contained in D, which we will denote ⪯_lex.
For ω ∈ B, let γ = γ(ω) be the minimal open path (for ⪯ lex ) contained in D, from B 1 to B 2 . We call a point z ∈ γ(ω) an overlap point if there exists a cluster as in the definition of Y (1) that intersects {z}.
We now define a map Ψ ∶ Y (1) → X as follows. For ω ∈ Y (1) , because of the topological condition (3.1), there exists at least one overlap point z ∈ D. We choose arbitrarily one such overlap point z = z(ω).
We define Ψ(ω) by modifying the configuration ω inside the region Λ_2(z) as follows. Let γ_i and γ_j be the first and last points, respectively, of Λ_1(z) visited by γ. Let a_0 be a point of ∂Λ_1(z) connected to A_0 by an open path (a_0, . . . , a_m) with a_1, . . . , a_m ∈ D′ ∖ Λ_1(z). The existence of such a point is guaranteed by the fact that z is an overlap point. In Ψ(ω), edges with no endpoint in Λ_1(z) have the same state as in ω. All edges with exactly one endpoint in Λ_1(z) are declared closed, with the exception of (γ_{i−1}, γ_i), (γ_j, γ_{j+1}) and (a_0, a_1), which are open (note that, since ω ∉ X, these three edges are distinct). The edges with both endpoints in Λ_1(z) are closed, with the exception of two open edge-disjoint paths g = (g_0, . . . , g_k) and h = (h_0, . . . , h_l), where g connects γ_i to γ_j inside Λ_1(z) and h connects a point g_t of g to a_0. The existence of such a modification may easily be checked and we do not give additional details here. See Figure 2 for an illustration. It is immediate that Ψ(ω) is indeed in X.
In order to compare φ_p(X) to φ_p(A ∩ B), we will use the following simple relation:
φ_p(Y^{(1)}) ≤ sup_{σ∈Im(Ψ)} |Ψ^{−1}(σ)| ⋅ sup_{ω∈Y^{(1)}} (φ_p(ω) / φ_p(Ψ(ω))) ⋅ φ_p(X).   (3.5)
Since ω and Ψ(ω) only differ in Λ_2(z), the finite energy property (3.4) bounds the second supremum by a constant depending only on p, G and S. Let us now bound sup_{σ∈Im(Ψ)} |Ψ^{−1}(σ)|. Fix σ ∈ Im(Ψ) and ω ∈ Ψ^{−1}(σ). Recall that γ(σ) is the minimal σ-open path contained in D from B_1 to B_2. (Such a path necessarily exists since σ ∈ X.) We claim that, due to the nature of the modification applied to ω in order to obtain Ψ(ω) = σ, and to the fact that ⪯_lex is lexicographical, γ(σ) coincides with γ(ω) up to the first time it enters Λ_1(z) and after the last time it exits Λ_1(z). More precisely, we claim that γ(σ) is the concatenation of γ_{[0,i]}(ω), (g_0, . . . , g_k) and γ_{[j,n]}(ω) (where n is the length of γ(ω) and i, j and (g_0, . . . , g_k) are defined above). This fact is essential, and we give a detailed explanation below.
Figure 2: The local modification performed on ω in and around Λ_1(z) to obtain Ψ(ω). The blue path is g and the red one is h. Note that (g_t, g_{t+1}) ⪯ (h_0, h_1). The central axis in the image is the column over z.
There are three possible situations; we analyse them separately and show that each leads to a contradiction.
Let τ′′ be the first time after τ when γ_{τ′′}(σ) ∈ γ(ω). The above discussion implies that the resulting path is open in ω and provides a ⪯_lex-smaller first section for a connection from B_1 to B_2 in D.
Note that g_t is the unique point x ∈ γ(σ) that is connected to A_0 by a σ-open path in D′ ∖ γ(σ). Thus g_t is determined by σ, and so is z, the first coordinate of g_t. Since ω and σ differ only inside Λ_2(z), we obtain the bound
|Ψ^{−1}(σ)| ≤ 2^{|E(Λ_2(z))|}
for all σ ∈ Im(Ψ).
It follows from (3.5) and the above bounds that φ_p(Y^{(1)}) ≤ C φ_p(X) for some constant C = C(p, G, S). The same bound applies to φ_p(Y^{(2)}), and combining the two gives an upper bound on φ_p((A ∩ B) ∖ X) in terms of φ_p(X), which leads to (3.2). ◻
The idea of the proof of the second statement is that, if A has high probability, then typically there must be a large number of overlap points, since otherwise the connection between A_0, A_1 and A_2 could easily be broken (this is proved in Lemma 3.3). Using this fact, we may associate to a configuration ω ∈ (A ∩ B) ∖ X not one, but many configurations σ ∈ X. This in turn implies, using Lemma 3.2 below, that X has much higher probability than (A ∩ B) ∖ X.
Several technical difficulties occur in this argument, and the proof requires some new ingredients. In particular, the ordering of the edges used for defining the minimal path γ(ω) needs to be random.
Let O denote the set of total orderings of the oriented edges of D′ and let μ be the uniform measure on O. Set ν = φ_p ⊗ μ, the measure on {0, 1}^{E(S)} × O obtained as the product of φ_p and μ.
As in the previous proof, set Y = (A ∩ B) ∖ X and consider ω ∈ Y. In the previous proof we defined overlap points; since here we will need to work with γ^{(1)} and γ^{(2)} simultaneously, we will define (1)-overlap points and (2)-overlap points. For i = 1, 2, let W^{(i)} = W^{(i)}(ω, ⪯) be the set of points z ∈ G such that {z} intersects γ^{(i)} and also intersects an open cluster C of D with the following properties:
Call the points of W^{(i)} (i)-overlap points. Obviously, a point can simultaneously be both a (1)- and a (2)-overlap point. See Figure 3 for an illustration.
Since ω ∈ Y , any crossing in D between A 1 and A 2 as in the definition of A necessarily contains at least one overlap point of each type.
We also introduce the following related notion. For i = 1, 2, we say a point z ∈ D is an (i)-almost-overlap point if there exist z′ ∈ Λ_1(z) and s, s′ ∈ S such that
Let U^{(i)}(ω, ⪯) denote the set of (i)-almost-overlap points. It will be useful to note that, for i = 1, 2, W^{(i)}(ω, ⪯) ⊂ U^{(i)}(ω, ⪯). To be precise, an (i)-almost-overlap point is an (i)-overlap point if, in addition to the conditions above, z = z′ and (z, s′) is connected to both A_1 and A_2 in D.
Our aim is to bound φ_p(Y) = ν(Y × O). To do this we will split Y × O into three events. Since these depend on the (random) order ⪯, we will henceforth work with pairs (ω, ⪯).
Fix a constant c > 0 that we will identify later in the proof (see the end of the proof of Lemma 3.3), and define α = −c log(φ_p(A^c)). Define the following events:
Note that Y × O = Y_{≤α} ∪ Y^{(1)}_{>α} ∪ Y^{(2)}_{>α}, but that the two latter events are not necessarily disjoint. We start by bounding the probability of the first event.
The idea behind this lemma is that, for (ω, ⪯) ∈ Y ≤α , the connection between A 1 and A 2 in D is fragile, since it only has few overlap points with γ (1) and γ (2) . Thus, it is easy to break this connection, and this leads to an upper bound on ν(Y ≤α ) in terms of φ p (A c ).
Proof. Define a map Ψ : Y_{≤α} → A^c × O as follows. Take (ω, ⪯) ∈ Y_{≤α}. Let W(ω, ⪯) be the set of points z ∈ W^{(1)} ∪ W^{(2)} such that {z} is connected to A_0 without using other points of W^{(1)} ∪ W^{(2)}. It is essential to remark that, since ω ∈ Y, we have W(ω, ⪯) ≠ ∅.
Figure 3: The red cluster ensures the occurrence of the event A. The blue paths are γ^{(1)} and γ^{(2)}. Not all intersections between the blue and red paths are overlap points; only the marked ones are. Of these, only the doubly marked point is in W. The encircled region contains a (2)-almost-overlap point.
Let Ψ(ω, ⪯) = (σ, ⪯) with σ equal to ω for all edges with no end-point in W (ω, ⪯). The edges with at least one end-point in W (ω, ⪯) are declared closed in σ, unless they are part of γ (1) or γ (2) , in which case they remain open.
Let us show that Ψ(ω, ⪯) ∈ A^c × O, that is, σ ∉ A. Suppose that this is not the case and that σ ∈ A. We know that σ ∈ B (B_1 and B_2 being joined by γ^{(1)} and γ^{(2)}); moreover σ ≤ ω, so σ ∉ X. We therefore conclude that σ ∈ Y. Then, by the topological condition (3.1), in σ there exists at least one overlap point z_0 which is connected to A_0 in D′. Let χ be a σ-open path in D′ from z_0 to A_0. Denote by (z_1, s_1) the last point on χ such that z_1 is an overlap point. Then z_1 ∈ W(ω, ⪯) and hence the edges emanating from (z_1, s_1) should be closed in σ. This contradicts the fact that (z_1, s_1) is connected to A_0 in σ. We have therefore shown that σ ∉ A.
We now use Lemma 3.2 to bound the probability of the event under study. Condition 1 is satisfied by definition; condition 2 is satisfied with t = 1. We focus on the third condition.
Using the definition of α and Lemma 3.2, we obtain a bound which implies the lemma, provided the constant c is chosen sufficiently small. ◻
We will now focus on bounding the probabilities of Y^{(i)}_{>α} for i = 1, 2. More specifically, we will prove the following.
By symmetry, we can concentrate on bounding ν(Y^{(1)}_{>α}). To simplify notation, we will henceforth omit the index (1).
The idea behind this lemma is that, for (ω, ⪯) ∈ Y >α , the multitude of almost-overlap points gives many opportunities for γ to connect to A 0 . Thus, the probability of Y >α should be much smaller than that of X , and this will ultimately yield the bound (3.7).
To make this heuristic rigorous, we will define a multi-valued map Ψ : Y_{>α} → 2^{X × O} and apply Lemma 3.2. As suggested above, the function Ψ will consist in connecting A_0 to γ by modifying the configuration locally around certain almost-overlap points; we say we will perform a connecting surgery at these points. Not all almost-overlap points are suited to performing the connecting surgery, and we start by identifying those that are.
Fix in S an arbitrary system of geodesics uniting any pair of points s, s ′ ∈ S; such a system always exists since S is connected. We may then talk of the segment between s and s ′ , which we denote by [s, s ′ ]. As mentioned in the introduction, one may think of S = {0, . . . , k}, in which case the segment between s and s ′ > s is simply [s, s ′ ] = {s, s + 1, . . . , s ′ }. For (ω, ⪯) ∈ Y × O, we call a point z ∈ D a good almost-overlap point, if it is an almost-overlap point (with z ′ , s and s ′ as in the definition of (1)-almost-overlap points) and in addition • there is no t strictly between s and s ′ such that (z, t) ∈ γ and • if γ j = (z, s) and if t ∈ S is the first point after s when going from s to s ′ along [s, s ′ ], then (γ j , γ j+1 ) ⪯ ((z, s), (z, t)).
Let V (ω, ⪯) be the set of good almost-overlap points. The following lemma states that generally a positive proportion of almost-overlap points are good.
Lemma 3.6. For any configuration ω_0 ∈ Y and path γ_0, (3.8) holds whenever the conditioning is not void.
Remark 3.7. It is for the above lemma alone that the random ordering is necessary. Indeed, for a fixed ordering, there is no guarantee that enough good almost-overlap points exist.
Proof. Before we start the proof, let us mention that the set of almost-overlap points U(ω, ⪯) depends only on ω and γ(ω, ⪯), and not otherwise on ⪯. The set of good almost-overlap points does, however, depend further on ⪯.
Fix ω_0 ∈ Y and a path γ_0. Let U_0 be the set U(ω_0, ⪯) for an ordering ⪯ such that γ(ω_0, ⪯) = γ_0. (Such an ordering exists if the conditioning in (3.8) is not degenerate.) We will prove that, for each z ∈ U_0,
In other words, when averaging over the choice of the order ⪯, any almost-overlap point is good with probability at least 1/2. This implies (3.8) through a direct application of Markov's inequality.
Fix z ∈ U_0 as above. Let z′ ∈ Λ_1(z) and s, s′ ∈ S be closest to each other, as in the definition of almost-overlap points, i.e. with
Let t ∈ S be the first point after s when going from s to s′ along [s, s′] and let f = ((z, s), (z, t)). Let e = (γ_i, γ_{i+1}) and let e_1, . . . , e_k be the oriented edges emanating from γ_i, other than e, such that there exists an ω-open path in D ∖ γ_{[0,i]} from (z, t) to B_2 starting with e_i.
Fix a real number c′ ∈ (0, 1/2), which we will identify later, and let j = ⌈c′α⌉. Consider a pair (ω, ⪯) ∈ Y′_{>α}. For z_1, . . . , z_j ∈ V(ω, ⪯), we define ω_{z_1,...,z_j} as follows. By definition of V, for each z_k there exists a pair of distinct points s_k, s′_k ∈ S and a point z′_k such that
and, if t_k is the first point of S when going from s_k to s′_k along [s_k, s′_k], then (γ_i, γ_{i+1}) ⪯ ((z_k, s_k), (z_k, t_k)).
Note that conditions (a), (b), (c) and (e) are exactly those of the definition of V. Condition (d) may be ensured by taking s′_k as close to s_k as possible. We choose the points s_k, s′_k and t_k as above, following some deterministic ordering when several choices are possible. Then ω_{z_1,...,z_j} is identical to ω except for the following edges, for each k:
• the edges of the link between (z_k, s_k) and (z′_k, s′_k) are open,
• all edges of the form ((z_k, t), (z′, t)) with t ∈ (s_k, s′_k) are closed.
We say we obtain ω_{z_1,...,z_j} from ω by performing a connecting surgery at each point z_k. Observe that, for any choice of z_1, . . . , z_j ∈ V(ω, ⪯), we have ω_{z_1,...,z_j} ∈ X. Indeed, the connecting surgery does not close the path γ(ω, ⪯), so ω_{z_1,...,z_j} ∈ B, and in addition any one connecting surgery ensures that
Let us now verify the conditions of Lemma 3.2. The first condition of the lemma is satisfied by definition. Since |V(ω, ⪯)| > α/4 for all (ω, ⪯) ∈ Y′_{>α}, Ψ(ω, ⪯) contains at least C(α/4, j) elements, where C(⋅, ⋅) denotes the binomial coefficient. Thus the second condition is satisfied with t = C(α/4, j).
Let us now study the third condition. The first thing to notice is that, for (ω, ⪯) as above and z_1, . . . , z_j ∈ V(ω, ⪯), we have γ(ω, ⪯) = γ(ω_{z_1,...,z_j}, ⪯). The proof of this fact follows the same lines as the corresponding step in the proof of Lemma 3.1(i). Let us only mention that the connecting surgery is such that γ(ω, ⪯) is open in ω_{z_1,...,z_j}. Moreover, z_1, . . . , z_j were chosen as good almost-overlap points, so that the path γ(ω, ⪯) is the minimal continuation of a crossing from B_1 to B_2 at every point z_k. Indeed, if γ(ω, ⪯)_j = (z_k, s_k), then there are only three ω_{z_1,...,z_j}-open edges emanating from (z_k, s_k), and (γ(ω, ⪯)_j, γ(ω, ⪯)_{j+1}) is preferable to the first edge in the link between (z_k, s_k) and (z′_k, s′_k). Fix now (σ, ⪯) = (ω_{z_1,...,z_j}, ⪯) ∈ Ψ(ω, ⪯) for some (ω, ⪯) ∈ Y_{>α} and z_1, . . . , z_j ∈ V(ω, ⪯). Then ((z_1, s_1), . . .
, (z_j, s_j)) are the only points (z, s) on γ(σ, ⪯) that are connected to A_0 by a σ-open path intersecting γ(σ, ⪯) only at (z, s). Thus ω and σ only differ on the set S(σ) = ∪_{k=1}^{j} Λ_1(z_k). We insist that, since (z_1, s_1), . . . , (z_j, s_j) are determined by σ, so is the set S(σ). Thus the third condition of Lemma 3.2 is satisfied, with s = j|Λ_1|. The lemma then implies
Let Q = (2q / min{p, 1 − p})^{|Λ_1|}. Note that Q is a constant depending only on p, G and S, and recall that j was chosen as j = ⌈c′α⌉. Since ν(X × O) ≤ 1, using Stirling's formula we obtain
for α large enough. By choosing c′ ∈ (0, 1/2) such that (2c′)^{c′} ≤ Q^{−1} and setting
In order to obtain the conclusion of Lemma 3.5, recall that α = −c log(φ_p(A^c)) for some constant c depending only on p, G and S, and that α may be considered large since we restrict ourselves to small values of φ_p(A^c). ◻
Let us now conclude the proof of Proposition 3.1(ii). Note that the sought bound is only relevant when φ_p(A^c) is small. We will therefore prove the bound assuming φ_p(A^c) is small enough for Lemma 3.5 to hold; the result may be extended to any value of φ_p(A^c), with a possibly altered constant β. Recall that Y × O = Y_{≤α} ∪ Y^{(1)}_{>α} ∪ Y^{(2)}_{>α}. Lemmas 3.3 and 3.5 bound the ν-probability of the three events on the right-hand side; we can combine them to obtain a bound that yields (3.3) through basic algebra. ◻

Bounds for crossing probabilities
As mentioned in the introduction, the first step in the argument of [6] applies to non-planar graphs. We state it here without proof.
Proposition 4.1. For any p > p̃_c, inf_{n ≥ 1} φ_p(C_v(2n, n)) > 0.
The object of this section is the following result, which corresponds to the second step in [6]. It may be understood as a Russo-Seymour-Welsh type result, with the remark that it requires increasing the value of the edge-weight.
then for any p′ > p,
inf_{n ≥ 1} φ_{p′}(C_h(2n, n)) > 0.
An immediate consequence of the two above statements is the main result of this section:
Corollary 4.3. For any p > p̃_c, inf_{n ≥ 1} φ_p(C_h(2n, n)) > 0.
Let us now focus on the proof of Proposition 4.2, the core of which lies in the following lemma.
Lemma 4.4. Let 0 < p_1 < p_2 < p_3 < 1, and suppose that
There exist constants c_0, c_1 > 0, depending only on p_1, p_2 and p_3, such that, if n, I ∈ N are such that 1 ≤ I ≤ n/400 and
In [6] it was shown that a similar statement implies Proposition 4.2 (with slightly different formulations). This step adapts readily to the present context, and we do not give more details here; the interested reader is referred to [6, Proof of Prop. 4.1]. The rest of the section is dedicated to proving Lemma 4.4. We start with some notation.
Let A_1, . . . , A_K be subsets of vertices of some rectangle R of S. For a configuration ω ∈ {0, 1}^{E(S)}, we say that the subsets are separated in R if A_i ↔ A_j in R fails for all 1 ≤ i < j ≤ K, that is, if they are contained in distinct clusters of the configuration ω restricted to R. We say that the subsets A_1, . . . , A_K are strongly separated in R if Ā_1, . . . , Ā_K are separated in R. Here we have abusively used the notation Ā_i for the set π(A_i) × S, where π : S → G denotes the projection onto the first coordinate. In other words, the sets are strongly separated if there is no open path in the rectangle whose projection on G intersects the projections of two distinct sets.
It is easy to check that, if ω is a configuration containing K strongly separated vertical crossings of some rectangle R, then
H_{C_h(R)}(ω) ≥ K − 1.
Indeed, let A_1, . . . , A_K be vertical crossings of R, strongly separated in R in the configuration ω.
In particular, $A_1, \dots, A_K$ are disjoint connected sets crossing $R$ vertically. Hence we may order them from left to right; we will assume this is already the case. Fix a self-avoiding path $\gamma$ contained in $R$ and crossing it horizontally; orient it from left to right. It intersects each set $A_i$ at least once. But for each $i$, since $A_i$ and $A_{i+1}$ are strongly separated in $R$, $\gamma$ must contain at least one $\omega$-closed edge between any point of intersection with $A_i$ and the first following intersection with $A_{i+1}$. This implies that $\gamma$ contains at least one closed edge in the region between $A_i$ and $A_{i+1}$ for every $i$, hence at least $K - 1$ closed edges overall, yielding the desired bound.
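The counting step of the argument above may be summarized in formula form (this is only a restatement of the observation proved in the text, not the original numbered bound, which is not recoverable here):

```latex
% If \omega contains K strongly separated vertical crossings of R, then
% every horizontal self-avoiding crossing of R meets at least K-1 closed edges:
\text{for every horizontal self-avoiding crossing } \gamma \subset R, \qquad
\#\{\, e \in \gamma \,:\, \omega(e) = 0 \,\} \;\ge\; K - 1 .
```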
Proof of Lemma 4.4. Fix $n$ and $I$ satisfying the assumptions of the lemma and set $v = \frac{1}{100 I}$ (we will specify the values of the constants $c_0$ and $c_1$ later in the proof; it will be apparent that they do not depend on $n$ and $I$).
In light of the above observation, to prove Lemma 4.4 we aim to show the existence of $2^I$ strongly separated vertical crossings of $[0, 2n] \times [0, n/2]$. The proof follows the lines of [6, Lemma 4.3], with the essential difference that the crossings need to be strongly separated rather than simply separated.
We start with a series of claims for $\phi_{p_3}$, similar to those in the proof of [6, Lemma 4.3]. In the present context, the proofs of these claims will require the gluing lemma 3.1(i). Once the claims are established, we use them to show that, with positive $\phi_{p_3}$-probability, there exist $2^I$ separated crossings of $[0, 2n] \times [0, n/2]$. Finally, we deduce that there exist $2^I$ strongly separated crossings with positive $\phi_{p_2}$-probability, using Lemma 2.2.
In what follows, the constant $c > 0$ is that of (3.2); it depends only on $p_3$ and $S$. Define

Claim 0. For $\alpha$ defined as above, we have

where $c_1 = \frac{1}{5600}$ (this is the constant $c_1$ that appears in Lemma 4.4).
Proof of Claim 0. Choose $k \in [\frac{n}{8}, \frac{n}{2}]$ achieving the maximum in (4.4). We will show by induction on $j \ge 1$ that

Applying this to $j = \frac{1}{4v}$, we obtain

where the first inequality is the conclusion of the gluing lemma and the second is due to the invariance under translation and rotation. Apply

which is the desired conclusion.

When combining the two events above using the first part of the gluing lemma, we obtain

The above event has probability less than $\alpha$ (by definition of $\alpha$), hence $\beta \le \alpha + \alpha^{c}$. By considering the other possibilities for the lower and higher endpoints, the claim follows.

All four events revolve around the rectangle $R(k)$. In the following, we will use translates of these events (by $z \in \mathcal{G}$), and we will say for instance that $E(k)$ occurs in some rectangle $R(k) + z$ if $E(k)$ occurs for the translate of the configuration by $-z$.

• $F(k)$ occurs in the rectangle $R_j$,
• at least one of $G(k)$ and $\tilde{G}(k)$ occurs in the rectangle $R_j$.

Using a simple union bound and the estimates of Claims 1–3, we obtain

Consider a configuration not in $H(k)$ containing a vertical open crossing $\gamma$ of $S(k)$. We are now going to explain why such a crossing necessarily contains two separated crossings of $S((1 - 11u)k)$.
Since none of the rectangles $[juk, (2 + (j + 1)u)k] \times [-k, k]$ is crossed horizontally, $\gamma$ is contained in one of the rectangles $R_j$. Fix the corresponding index $j$. Parametrize $\gamma$ by $[0, 1]$, with $\gamma_0$ being the lower endpoint. Since

where $C =$

Fix a sequence $(u_i)_i$, with $u_i \in [v, 2v]$ and $k_i u_i \in \mathbb{Z}$ for $0 \le i < I$. The existence of the $u_i$ is due to the fact that $v \ge \frac{4}{n}$ (since $I \le \frac{n}{400}$). Define the events $I(k_i)$ of Claim 5 for these values of $u_i$. Except on the event $\bigcup_{i=0}^{I-1} I(k_i)$, any configuration with a vertical crossing of $S(k_0)$ has $2^I$ strongly separated vertical open crossings of $S(k_I)$.
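The existence claim for the $u_i$ rests on a one-line computation; with $v = \frac{1}{100 I}$ as fixed at the start of the proof:

```latex
I \le \frac{n}{400}
\quad\Longrightarrow\quad
v = \frac{1}{100\, I} \;\ge\; \frac{1}{100 \cdot \frac{n}{400}} \;=\; \frac{4}{n},
```

so the interval $[v, 2v]$ has length $v \ge 4/n$, which is what guarantees that it contains a multiple of $1/k_i$ when $k_i$ is of order $n$.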
By the union bound, Claims 0 and 5 and the definitions of $u$ and $v$, we obtain

where the last inequality is due to the choice of $I$. We may choose $c_0 = \frac{\sqrt{c}}{200 C'} > 0$, so that the right-hand side is smaller than $\delta/2$. But $S(k_0)$ is crossed vertically with $\phi_{p_2}$-probability at least $\delta$, hence, with $\phi_{p_2}$-probability at least $\delta/2$, $S(k_I)$ contains $2^I$ strongly separated vertical crossings. By the observations made before the proof, we have

which directly implies the desired result.

The previous section showed that for $p > p_c$, crossing probabilities in the hard direction for $2n \times n$ rectangles are bounded away from 0, uniformly in $n$. The following two results show that these probabilities actually tend rapidly to 1 as $n \to \infty$, for any $p > p_c$. We start with a lemma, taken from [6, Cor. 5.2], which is valid in all dimensions. It is an integrated form of the result of [11]. We do not give the proof here, as it is identical to the one in [6].
The proof of Proposition 5.2 is based on the following lemma.
The second inequality is due to the fact that the horizontal crossings of the rectangles $[0, 2N] \times [in, (i + 1)n]$ for $0 \le i < \lfloor N/n \rfloor$ are disjoint, and to the invariance of the measure under translation.
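Disjointness of the strips is typically exploited through the FKG inequality, which holds for the random-cluster measure with $q \ge 1$ and makes decreasing events positively correlated; a sketch of the resulting bound (assuming this is the intended use), with translation invariance giving the final equality:

```latex
\phi_p\Big( \bigcap_{i=0}^{\lfloor N/n \rfloor - 1}
    \mathcal{C}_h\big([0,2N] \times [in,(i+1)n]\big)^{c} \Big)
\;\ge\; \prod_{i=0}^{\lfloor N/n \rfloor - 1}
    \phi_p\Big( \mathcal{C}_h\big([0,2N] \times [in,(i+1)n]\big)^{c} \Big)
\;=\; \big( 1 - \phi_p(\mathcal{C}_h(2N, n)) \big)^{\lfloor N/n \rfloor}.
```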
Let us now bound $\phi_p(\mathcal{C}_h(2N, n))$ from below. By the same induction as in Claim 0, using the quantitative gluing lemma 3.1(ii) $2\lceil N/n \rceil$ times, we obtain

where $\beta > 0$ is given by Lemma 3.1(ii). Using (2.2) and the fact that $\lceil N/n \rceil \le \frac{2N}{n}$, the lemma follows. ◻

Proof of Proposition 5.2. Fix $p < p'$ and $\Delta > 0$ as in the proposition. Fix $\varepsilon > 0$ such that $p + \varepsilon < p'$. We first introduce two increasing sequences $(n_k)_{k \ge k_0}$ in $\mathbb{N}$ and $(p_k)_{k \ge k_0}$ in $[p, p']$ such that
$$\phi_{p_k}(\mathcal{C}_h(2n_k, n_k)) > 1 - e^{-2^k},$$
(The indices start from $k_0$ only as a matter of notational convenience.) For $k \ge 1$, set $v(k) = (1 - e^{-2^k})^{2 \cdot 4^k} - 2 \cdot 4^k e^{-\beta 2^k}$. The sequence $v(k)$ tends to 1 as $k$ tends to infinity, so we may fix an index $k_0$ such that $v(k) > \frac{1}{2}$ for all $k \ge k_0$. Set $p_{k_0} = p$ and choose $n_{k_0} \in \mathbb{N}$ such that $\phi_p(\mathcal{C}_h(2n, n)) > 1 - e^{-2^{k_0}}$ for all $n \ge n_{k_0}$ (the choice of $n_{k_0}$ is possible by hypothesis). Now define, for $k \ge k_0$, $n_{k+1} = 4^k n_k$ and $p_{k+1} = p_k + \varepsilon 2^{-k-1}$.
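Assuming the parameter increments take the geometric form $p_{k+1} = p_k + \varepsilon 2^{-k-1}$, the sequence $(p_k)$ indeed stays in $[p, p']$:

```latex
p_k \;=\; p + \varepsilon \sum_{j=k_0}^{k-1} 2^{-j-1}
    \;<\; p + \varepsilon \sum_{j \ge k_0} 2^{-j-1}
    \;=\; p + \varepsilon\, 2^{-k_0}
    \;\le\; p + \varepsilon \;<\; p',
```

so every measure $\phi_{p_k}$ used in the induction below is at a parameter strictly below $p'$, as required for the final monotonicity step.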
We will now prove by induction that $\phi_{p_k}(\mathcal{C}_h(2n_k, n_k)) \ge 1 - e^{-2^k}$ for all $k \ge k_0$. The statement is true for $k_0$ by the choice of $n_{k_0}$. Suppose it is true for some $k \ge k_0$. Then, by Lemma 5.3,

and the induction is complete. By monotonicity of $p \mapsto \phi_p$, we deduce that $1 - \phi_{p'}(\mathcal{C}_h(2n_k, n_k)) \le e^{-2^k}$ for all $k \ge k_0$. Since $n_k = n_{k_0} 4^{k_0 + \dots + (k-1)} \le n_{k_0} 4^{k^2}$, it follows that for all sufficiently large $k \ge k_0$ we have $n_k^{\Delta} \le e^{2^k}$, hence $1 - \phi_{p'}(\mathcal{C}_h(2n_k, n_k)) \le n_k^{-\Delta}$, which is the desired statement for $n = n_k$. It remains to prove the statement for values of $n$ in between the scales $(n_k)_{k \ge k_0}$. Fix $n$ such that $n_k < n < n_{k+1}$ for some $k \ge k_0$. By (5.2), we have $\phi_{p'}(\mathcal{C}_h(2n, n)) \ge \phi_{p'}(\mathcal{C}_h(2n_{k+1}, n_k)) \ge v(k)$.
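The step from $e^{-2^k}$ to $n_k^{-\Delta}$ is elementary; with the bound $n_k \le n_{k_0} 4^{k^2}$:

```latex
n_k^{\Delta} \;\le\; n_{k_0}^{\Delta}\, 4^{\Delta k^2}
  \;=\; \exp\!\big( \Delta \log n_{k_0} + \Delta k^2 \log 4 \big)
  \;\le\; e^{2^k} \quad \text{for all sufficiently large } k,
```

since $2^k$ eventually dominates any quadratic function of $k$.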
We may now apply the gluing lemma using the events $A$ and $H_n$. That is, apply it with $D =$

Finally, as a consequence of (5.3), $\phi_{p_2}(H_{n_1}) > 1 - 2^{-n_1}$. We may therefore deduce that, for all $n \ge n_1$, $\phi_{p_2}(H_n) \ge \phi_{p_2}(H_{n_1}) - 4$