Noise sensitivity and Voronoi percolation

In this paper we study noise sensitivity and threshold phenomena for Poisson Voronoi percolation on $\mathbb{R}^2$. In the setting of Boolean functions, both threshold phenomena and noise sensitivity can be understood via the study of randomized algorithms. Together with a simple discretization argument, such techniques apply also to the continuum setting. Via the study of a suitable algorithm we show that box-crossing events in Voronoi percolation are noise sensitive and present a threshold phenomenon with polynomial window. We also study the effect of other kinds of perturbations, and emphasize the fact that the techniques we use apply for a broad range of models.


Introduction
The concept of a Boolean function, f : {0,1}^n → {0,1}, is of fundamental importance in theoretical computer science. Moreover, many of the most well-studied problems in the intersection between combinatorics and probability theory may be phrased in terms of (often monotone) Boolean functions. One is, in this context, interested in the typical behaviour of a Boolean function for an element in {0,1}^n chosen according to product measure with marginal density p, henceforth denoted by P_p. The study of Boolean functions has led to a vast literature on a range of fascinating phenomena, such as the existence of thresholds and the effect of small perturbations, see e.g. [12,15].
Threshold phenomena of monotone Boolean functions were first discovered by Erdős and Rényi [9] in their pioneering study of random graphs.
The existence of a sharp threshold is the essence of Kesten's celebrated 1980 proof that the critical probability for the existence of an infinite connected component in bond percolation on Z^2 equals 1/2 [14]. A sequence (f_n)_{n≥1} of monotone Boolean functions f_n : {0,1}^n → {0,1} is said to have a threshold at p ∈ (0,1) if, for every ǫ > 0, we have

  lim_{n→∞} P_{p−ǫ}[f_n = 1] = 0  and  lim_{n→∞} P_{p+ǫ}[f_n = 1] = 1.

The understanding of thresholds has increased with works by Russo [17], Kahn, Kalai and Linial [13], Friedgut and Kalai [10], and Talagrand [19].
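The threshold phenomenon is easy to observe numerically for a simple monotone function such as majority. The following sketch is only an illustration (the helper names `majority` and `estimate_prob` are ours, and the probabilities are Monte Carlo estimates, not exact values):

```python
import random

def majority(bits):
    # Monotone Boolean function: 1 if more than half of the bits equal 1.
    return 1 if 2 * sum(bits) > len(bits) else 0

def estimate_prob(f, n, p, trials=2000, rng=None):
    # Monte Carlo estimate of P_p[f = 1] under product measure with density p.
    rng = rng or random.Random(0)
    hits = sum(f([1 if rng.random() < p else 0 for _ in range(n)])
               for _ in range(trials))
    return hits / trials
```

Already for n = 501 the estimate at p = 0.45 is close to 0 and at p = 0.55 close to 1, and the window around p = 1/2 narrows as n grows.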
The notion of noise sensitivity was introduced in a seminal paper by Benjamini, Kalai and Schramm [5]. Given ω ∈ {0,1}^n, we obtain an ǫ-perturbation ω_ǫ of ω by resampling each bit of ω independently with probability ǫ. A sequence (f_n)_{n≥1} of functions f_n : {0,1}^n → {0,1} is said to be noise sensitive at level p (NS_p for short) if f_n(ω) and f_n(ω_ǫ) are asymptotically uncorrelated, i.e., if

  lim_{n→∞} E_p[f_n(ω) f_n(ω_ǫ)] − E_p[f_n(ω)]^2 = 0.   (1.1)

The study of noise sensitivity has led to a detailed understanding of certain planar percolation models, both discrete: Benjamini, Kalai and Schramm [5], Schramm and Steif [18], Garban, Pete and Schramm [11], and in the continuum: Ahlberg, Broman, Griffiths and Morris [1], and Ahlberg, Griffiths, Morris and Tassion [2]. In this paper we study threshold phenomena and the effect of small perturbations in the context of Poisson Voronoi percolation on R^2. Our contributions in this direction are two-fold. First, we describe the discretization method developed in [1], by which we reduce the continuum problem to its discrete counterpart, and emphasize the close relation between threshold phenomena and noise sensitivity of Boolean functions via the study of randomized algorithms. Combining the two techniques we derive quantitative estimates on the width of the threshold window and the rate of decorrelation in (1.1). Second, we discuss a range of different but related notions of perturbations in the context of Voronoi percolation. Some of these notions we examine in detail, whereas others are left as open problems.
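The contrast between noise-sensitive and noise-stable functions can be seen in a small simulation. The sketch below (our own hypothetical helpers; Monte Carlo only) estimates the covariance in (1.1) for two extreme examples: the dictator function, which is stable, and parity, which decorrelates at rate (1 − ǫ)^n:

```python
import random

def eps_perturb(bits, eps, p, rng):
    # Resample each bit independently with probability eps, using density p.
    return [(1 if rng.random() < p else 0) if rng.random() < eps else b
            for b in bits]

def noise_covariance(f, n, p, eps, trials=4000, seed=0):
    # Monte Carlo estimate of E_p[f(w) f(w_eps)] - E_p[f(w)]^2.
    rng = random.Random(seed)
    total = joint = 0.0
    for _ in range(trials):
        w = [1 if rng.random() < p else 0 for _ in range(n)]
        v = f(w)
        total += v
        joint += v * f(eps_perturb(w, eps, p, rng))
    mean = total / trials
    return joint / trials - mean * mean

dictator = lambda w: w[0]       # noise stable: covariance stays near p(1-p)(1-eps)
parity = lambda w: sum(w) % 2   # noise sensitive: covariance ~ (1-eps)^n / 4
```

At p = 1/2 and ǫ = 0.1 the dictator covariance stays near 0.225 for every n, whereas for parity on n = 100 bits it is indistinguishable from zero.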
We remark that the application of the discretization approach is here somewhat simpler than as originally developed in [1]. Moreover, the techniques we use apply to a range of continuum percolation models such as Poisson Boolean percolation and confetti percolation, as opposed to the approach in [2] that exploits colour-switching tricks. For self-dual models, such as Voronoi and confetti percolation, our approach offers an alternative proof that the critical probability for percolation equals 1 /2, as originally proved by Bollobás and Riordan [6]. In addition, the quantitative estimates that we obtain on the size of the threshold window are new. We have chosen to present our results in terms of Voronoi percolation as this model offers a range of possibilities when it comes to different perturbations.
Description of Voronoi percolation. Poisson Voronoi percolation is a model for the study of long-range connections in a two-colouring of R^2 based on a tessellation. The large-scale behaviour in models of this kind is well-known to be governed by their behaviour in finite regions, and we shall for this reason work with the restriction of the model to the unit square. Let, hence, S := [0,1]^2 and let Ω denote the space of finite subsets of S × {0,1}, equipped with the Borel sigma algebra. Formally, we construct a Voronoi configuration on S based on a Poisson point process η on S × {0,1} (taking values in Ω) with intensity measure nλ_S ⊗ [pδ_1 + (1 − p)δ_0], where λ_S denotes Lebesgue measure on S.
Given η ∈ Ω, we define the Voronoi cell associated to (x, u) ∈ η as

  V(x) := { y ∈ S : d(y, x) ≤ d(y, x′) for all (x′, u′) ∈ η },

where d denotes the Euclidean distance. Based on the tessellation we declare a point in S red or blue depending on whether it is contained in the cell corresponding to a point in η with u-coordinate 0 or 1, respectively.² To rule out degenerate cases, we colour all points in S red in the case that η = ∅. We shall denote the associated measure by P_{n,p}, and we will occasionally suppress the subscript to ease the notation. Given a rectangle R ⊆ S, let H_R denote the event defined by the existence of a continuous blue path crossing R horizontally, and let f_R : Ω → {0,1} denote the indicator of the event H_R. Conditioned on η ≠ ∅, at p = 1/2 the model is self-dual, meaning that the red and blue components are equidistributed. Since any rectangle R ⊆ S is either crossed horizontally by a blue path or vertically by a red path, it follows by symmetry that³

  P_{n,1/2}[H_R] + P_{n,1/2}[V_R] ≤ 1,

where V_R denotes the event of a vertical blue crossing of R. Indeed, the function f_R is non-degenerate at p = 1/2 for any rectangle R ⊆ S: there exists a constant c_1 > 0, depending only on the aspect ratio of R, such that

  c_1 ≤ P_{n,1/2}[H_R] ≤ 1 − c_1,   (1.2)

uniformly in n. This was first proved by Tassion [20] for Voronoi percolation on R^2, and later extended in [2] to subsets of R^2 with boundary. The box-crossing property in (1.2) is a typical critical phenomenon and a suggestive indication that the critical threshold for the existence of an unbounded connected blue component in Poisson Voronoi percolation on R^2 equals 1/2.

² It is not hard to see that, with probability one, every Voronoi cell is a closed bounded convex set. A point on the boundary of some set may belong to more than one cell, but no point of S can belong to more than three cells. Besides, if two cells share a vertex, they share an entire edge. We can therefore ignore the fact that points on the boundary of two cells may be declared both red and blue.
³ Equality would hold here were it not for the possibility that η may be empty.
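The construction of a coloured Voronoi configuration can be sketched in a few lines. The helpers below are our own illustration (in particular `sample_poisson` uses Knuth's method, adequate only for moderate intensities), and a point of S is coloured by locating the nearest point of the process:

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's method; adequate for the moderate intensities used here.
    L = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod < L:
            return k
        k += 1

def sample_config(n, p, rng):
    # Marked Poisson process on S = [0,1]^2 with intensity n; marks are blue (1) w.p. p.
    return [(rng.random(), rng.random(), 1 if rng.random() < p else 0)
            for _ in range(sample_poisson(n, rng))]

def colour_at(x, y, eta):
    # Colour of the Voronoi cell containing (x, y); red (0) when eta is empty.
    if not eta:
        return 0
    px, py, u = min(eta, key=lambda q: (q[0] - x) ** 2 + (q[1] - y) ** 2)
    return u
```

Detecting a blue horizontal crossing of a rectangle then amounts to a connectivity search over the blue cells, which is the role played below by the event H_R.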
Description of results. In the continuum setting, a natural notion of perturbation of a Voronoi configuration is obtained as follows. For ǫ ∈ (0,1) let η(ǫ) be obtained from η by first thinning η by a factor 1 − ǫ and then sprinkling an independent density of ǫn points to regain the initial density n. The proportion p of blue points is in each step kept constant. We shall say that the function f_R : Ω → {0,1}, encoding the existence of a blue crossing of the rectangle R, is noise sensitive at level p if, for every ǫ > 0, we have

  lim_{n→∞} E_{n,p}[f_R(η) f_R(η(ǫ))] − E_{n,p}[f_R(η)]^2 = 0.   (1.3)

Moreover, we say that f_R has a positive noise sensitivity exponent if (1.3) holds with ǫ replaced by ǫ_n = n^{−α} for some α > 0. Notice that in (1.3) we have defined what it means for a single function to be noise sensitive, contrary to the discrete setting, where a sequence of functions was considered. The two definitions are the natural analogues of one another, and the reason for the difference lies in how dimensionality is expressed differently in the two settings.
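The thin-and-sprinkle perturbation is straightforward to simulate. The sketch below is a hypothetical implementation (with its own Poisson sampler so as to be self-contained); note that when η is Poisson with intensity n, the perturbed configuration has the same law:

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's method; fine for moderate lam.
    L = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod < L:
            return k
        k += 1

def perturb(eta, eps, n, p, rng):
    # Thin eta by a factor 1 - eps, then sprinkle an independent Poisson(eps * n)
    # cloud of fresh points; the blue density p is kept constant in both steps.
    kept = [pt for pt in eta if rng.random() < 1 - eps]
    fresh = [(rng.random(), rng.random(), 1 if rng.random() < p else 0)
             for _ in range(sample_poisson(eps * n, rng))]
    return kept + fresh
```

A thinned Poisson(n) process has intensity (1 − ǫ)n, so adding an independent Poisson(ǫn) cloud restores the original intensity, which is the stationarity used throughout.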
Our first theorem shows that box crossings in Poisson Voronoi percolation are noise sensitive at the critical parameter p = 1/2, and that the probability of a horizontal blue crossing of a rectangle R tends to either 0 or 1 outside of a polynomial-sized window around 1/2.

Theorem 1.1. For every rectangle R ⊆ S, the function f_R is noise sensitive at level p = 1/2 with a positive noise sensitivity exponent. Moreover, there exists a constant γ > 0 such that

  lim_{n→∞} P_{n, 1/2 − n^{−γ}}[f_R = 1] = 0  and  lim_{n→∞} P_{n, 1/2 + n^{−γ}}[f_R = 1] = 1.

We remark that the fact that P_p[f_R = 1] converges to either zero or one for p ≠ 1/2, together with the Cauchy-Schwarz inequality, implies that Voronoi percolation is trivially noise sensitive for p ≠ 1/2. In addition, the above provides an alternative proof of Bollobás and Riordan's theorem that the critical probability for Poisson Voronoi percolation on R^2 equals 1/2.
One way to think of the perturbation in (1.3) is as the following dynamical process evolving in time: let points appear in S × {0,1} at rate n, where they remain for an exponentially distributed time before disappearing. The measure P_{n,1/2} is stationary for this process, and for ǫ = 1 − e^{−t} the pair (η, η(ǫ)) corresponds to the dynamical process observed at times 0 and t.
In greater generality we may think of a perturbation as a reversible time-homogeneous Markov process (η(t))_{t≥0} on Ω evolving in equilibrium. For each such process, the Markov property and reversibility together give that

  E[f(η(0)) f(η(t))] = E[ E[f(η(0)) | η(t/2)]^2 ].

Hence, for each dynamical process of this kind, the correlation between two points in time measures the amount of information in some sigma algebra F - the sigma algebra generated by the glimpse of the process in one of the time points - and being sensitive with respect to this information is equivalent to

  Var( E[f(η(0)) | F] ) → 0.

Clearly, the more information contained in F the larger the variance. This indicates, in particular, that more conservative dynamics tend to affect a system to a lesser extent. Two natural notions of perturbations that conserve the number of points are
• re-randomize colours of a small proportion of points;
• re-randomize locations of a small proportion of points.
The former of these two notions was studied in [2], where the authors showed that the existence of crossings in Voronoi percolation is sensitive with respect to resampling a small proportion of the colours. The latter we study in this paper, and show that Voronoi crossings are sensitive also with respect to relocation of points within S.

Theorem 1.2. Let η* be obtained from η by re-randomizing the location of each point in η independently and uniformly within S with probability ǫ > 0. For every ǫ > 0 and rectangle R ⊆ S, we have

  lim_{n→∞} E_{n,1/2}[f_R(η) f_R(η*)] − E_{n,1/2}[f_R(η)]^2 = 0.

We also mention two further examples of perturbations for which the techniques explored in this work are insufficient. The first is to relocate points of a given colour while points of the other colour are kept fixed. The second consists in perturbing the location of each point by performing an independent Brownian motion run for time t = ǫ/√n. We consider both of these open problems interesting for future work.
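The relocation dynamics of Theorem 1.2 is conservative: it changes positions but preserves the number of points and their colours. A minimal sketch (our own helper, operating on configurations represented as lists of (x, y, colour) triples) could read:

```python
import random

def relocate(eta, eps, rng):
    # Re-randomize the location of each point independently with probability eps,
    # uniformly in S = [0,1]^2; colours and the number of points are conserved.
    return [(rng.random(), rng.random(), u) if rng.random() < eps else (x, y, u)
            for (x, y, u) in eta]
```

With ǫ = 0 the configuration is unchanged, while with ǫ = 1 every location is resampled but the multiset of colours is the same, reflecting the conservative nature of the dynamics.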
Proof overview. We will follow the approach developed in [1], and revisited in [4], by which the continuum problem is reduced to its discrete counterpart via a two-stage construction. The central idea is to consider a Poisson point process η_k chosen according to P_{kn,p} for some k ≥ 1, and obtain a configuration η from η_k via thinning. Conditional on η_k, we may think of η as an element in, and f_R as a function on, {0,1}^{η_k}. Conditional on η_k, we will be able to study the behaviour of f_R via techniques developed for the analysis of Boolean functions.
Russo's approximate 0-1 law says that any sequence of monotone Boolean functions for which the influence of each bit tends to zero exhibits a threshold behaviour [17]. A more modern approach to threshold phenomena comes from randomized algorithms via the OSSS inequality [16]. That randomized algorithms can be used to study threshold phenomena has previously been observed by Gady Kozma (see the appendix of [3]) and in a recent paper by Duminil-Copin, Raoufi and Tassion [8]. The latter also gives an alternative proof of the result due to Bollobás and Riordan that p c = 1 /2 for Voronoi percolation on R 2 . Randomized algorithms are also connected to noise sensitivity via the Schramm-Steif revealment Theorem [18]. In order to prove Theorem 1.1 we shall thus devise an algorithm that, conditional on η k , queries points in η k sequentially until the outcome of f R (η) is determined. If, with high probability, the algorithm has low revealment, that is, is unlikely to query any specific point in η k , then the result will follow.
The proof of Theorem 1.2 will also rely on the reduction to a discrete setting. The dynamical process studied there is conservative, and in that sense related to the concept of exclusion sensitivity studied by Broman, Garban and Steif [7]. We shall follow their approach, and instead of a direct study of the conservative dynamics, we shall show that there is a coupling between (η, η(ǫ)) and (η, η*) such that (f_R(η), f_R(η(ǫ))) and (f_R(η), f_R(η*)) agree with high probability. This will be possible due to a result in [7] which says that any noise sensitive sequence of Boolean functions (f_n)_{n≥1} is unlikely to change when resampling up to order √n of the variables. The result then follows by Theorem 1.1 and the observation that

  | E[f_R(η) f_R(η*)] − E[f_R(η) f_R(η(ǫ))] | ≤ P[ f_R(η*) ≠ f_R(η(ǫ)) ].

Structure of the paper. Tools and techniques from the analysis of Boolean functions will be central in the remainder of this paper. We shall in Section 2 begin with a brief review of these, centering on the use of randomized algorithms and their revealment. In Section 3 we outline the discretization method developed in [1], which will allow for these techniques to be applied in the setting of Voronoi percolation. In Section 4 we describe an algorithm that will be used to prove Theorem 1.1, and estimate its revealment. The proof of Theorem 1.1 is then given in Section 5, and Sections 6 and 7 are dedicated to study the effect of alternative perturbations, and to prove Theorem 1.2.
Acknowledgements. The authors thank Augusto Teixeira for valuable discussions. DA thanks the Swedish Research Council for financial support through grant 637-2013-7302. RB thanks FAPERJ for financial support through grant E-26/202.231/2015.

Analysis of Boolean functions
In the analysis of Boolean functions, discrete Fourier techniques have become an indispensable tool. Although phenomena such as sharp thresholds and noise sensitivity can be directly linked to the spectrum of the Fourier-Walsh decomposition of a Boolean function, it is often a very challenging task to obtain precise estimates on the spectrum itself. A range of techniques have therefore been developed in order to relate such phenomena to notions such as influence of variables and revealment of algorithms, which are typically more tractable quantities to estimate.
In this section, we review some results connecting influences and revealment to threshold behaviour and noise sensitivity. We shall avoid the discussion of Fourier techniques, which lie behind several of the results we describe, and refer the reader to the books [12] and [15] for a more extensive treatment.

Influence of variables
The influence at level p of variable k on a Boolean function f : {0,1}^n → {0,1} is defined as

  Inf_k^p(f) := P_p[ f(ω) ≠ f(σ_k ω) ],

where σ_k is the operator that changes ω at position k from ω_k to 1 − ω_k. Recall that a Boolean function is called monotone if f(ω) ≤ f(ω′) whenever ω_k ≤ ω′_k for every k. It is well-known that many monotone Boolean functions exhibit a threshold phenomenon, where the probability P_p[f = 1] increases from close to 0 to close to 1 in a narrow window - the threshold window. The central role of influences in the understanding of this phenomenon is emphasized by the Margulis-Russo formula. It says that, for any monotone Boolean function f,

  d/dp P_p[f = 1] = Σ_{k=1}^n Inf_k^p(f).

Russo's approximate 0-1 law [17] gives the first general condition for the existence of a threshold. Russo showed that for every ǫ > 0 there exists δ > 0 such that if Inf_k^p(f) ≤ δ uniformly in k and p, then P_p[f = 1] transitions from below ǫ to above 1 − ǫ in a window of width at most ǫ. Later works [13,10,19] have obtained a more precise formulation of Russo's theorem that allows one to get a quantitative bound on the width of the threshold window.
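For small n both sides of the Margulis-Russo formula can be computed exactly by enumeration, which makes the formula easy to verify. The sketch below (hypothetical helper names; enumeration over {0,1}^n, so small n only) does this for majority on three bits, where each influence equals 2p(1 − p) and the derivative is 6p(1 − p):

```python
from itertools import product

def prob_one(f, n, p):
    # Exact P_p[f = 1] by enumeration over {0,1}^n.
    return sum(f(w) * p ** sum(w) * (1 - p) ** (n - sum(w))
               for w in product((0, 1), repeat=n))

def influence(f, n, p, k):
    # Inf_k^p(f) = P_p[f(w) != f(sigma_k w)], where sigma_k flips bit k.
    total = 0.0
    for w in product((0, 1), repeat=n):
        flipped = list(w)
        flipped[k] = 1 - flipped[k]
        if f(w) != f(tuple(flipped)):
            total += p ** sum(w) * (1 - p) ** (n - sum(w))
    return total

maj3 = lambda w: 1 if sum(w) >= 2 else 0  # monotone: majority of three bits
```

Comparing the sum of influences with a numerical derivative of p ↦ P_p[maj3 = 1] confirms the formula at any p.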
Influences are likewise fundamentally connected to the notion of noise sensitivity. The BKS Theorem, due to Benjamini, Kalai and Schramm [5], says that a sufficient condition for a sequence (f_n)_{n≥1} of Boolean functions to be noise sensitive at level p is that

  lim_{n→∞} Σ_{k=1}^n Inf_k^p(f_n)^2 = 0.   (2.3)

For monotone functions this condition is also necessary.

Revealment of algorithms
A (randomized) algorithm is a rule which queries the bits of ω ∈ {0,1}^n in a random order, which is allowed to depend on what has been seen so far, and outputs either 0 or 1. An algorithm is said to determine f if its output equals f(ω) for each ω ∈ {0,1}^n. The revealment of an algorithm A with respect to K ⊆ [n] is defined as

  δ_p(A, K) := max_{k∈K} P_p[ A queries bit k ].

In order to verify the condition in (2.3), Benjamini, Kalai and Schramm [5] devised a method involving algorithms. This method was developed further in later work by Schramm and Steif [18]. In essence, this method shows that a sequence of functions is noise sensitive if there exists (a sequence of) algorithms that determines f_n without being likely to query any specific bit. The next proposition, due to Schramm and Steif [18], gives an explicit formulation of this last statement.

Proposition 2.1. Let f : {0,1}^n → {0,1} be a function and A an algorithm that determines f. Then, for every p ∈ (0,1), ǫ ∈ (0,1) and m ≥ 1, we have

  E_p[f(ω) f(ω_ǫ)] − E_p[f(ω)]^2 ≤ δ_p(A, [n]) m^2 + (1 − ǫ)^m.
Since the correlation is non-negative, it is immediate from the proposition above that a sequence (f_n)_{n≥1} is noise sensitive if there exists an algorithm A determining f_n with revealment tending to zero. Moreover, if δ_p(A, [n]) decays polynomially fast, then the sequence (f_n)_{n≥1} has positive noise sensitivity exponent.
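The revealment of a concrete algorithm is easy to estimate by simulation. As a toy illustration (our own helpers, not an algorithm from this paper), consider the OR function queried in uniformly random order until its value is determined: at p = 1/2 the search stops after about two queries on average, so by symmetry each bit is queried with probability roughly 2/n:

```python
import random

def or_algorithm(bits, rng):
    # Query bits in uniformly random order; stop once the value of OR is determined.
    order = list(range(len(bits)))
    rng.shuffle(order)
    queried = []
    for k in order:
        queried.append(k)
        if bits[k] == 1:   # OR is determined to equal 1
            break
    return queried          # if no 1 was found, every bit was queried (OR = 0)

def revealment(algo, n, p, trials=20000, seed=0):
    # Monte Carlo estimate of delta_p(A, [n]) = max_k P_p[bit k is queried].
    rng = random.Random(seed)
    counts = [0] * n
    for _ in range(trials):
        bits = [1 if rng.random() < p else 0 for _ in range(n)]
        for k in algo(bits, rng):
            counts[k] += 1
    return max(c / trials for c in counts)
```

For n = 20 and p = 1/2 the estimated revealment is close to 0.1, matching the heuristic 2/n.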
Randomized algorithms have also been related to influences and threshold phenomena via the following inequality, due to O'Donnell, Saks, Schramm and Servedio [16].

Proposition 2.2. Let f : {0,1}^n → {0,1} be a Boolean function and A an algorithm that determines f. Then, for every p ∈ (0,1), we have

  Var_p(f) ≤ Σ_{k=1}^n δ_p(A, {k}) Inf_k^p(f).

The above inequality implies, in particular, that Var_p(f) ≤ δ_p(A, [n]) Σ_{k=1}^n Inf_k^p(f), and hence, together with the Margulis-Russo formula, one concludes that monotone Boolean functions satisfy the inequality

  d/dp P_p[f = 1] ≥ Var_p(f) / δ_p(A, [n]).

As we shall see, we will, via the study of algorithms, be able to obtain polynomial bounds on the width of the threshold window of certain Boolean functions, where methods based on influences would give logarithmic bounds, see e.g. [10]. Somewhat less standard is the following upper bound on the sum of influences in terms of the revealment: for every function f : {0,1}^n → {0,1},

  Σ_{k=1}^n Inf_k^p(f) ≤ √n ( Σ_{k=1}^n Inf_k^p(f)^2 )^{1/2} ≤ C √(n δ_p(A, [n])).

The former of the two inequalities is immediate from the Cauchy-Schwarz inequality, whereas the latter follows from (a variant of) the Schramm-Steif revealment Theorem. Although we are not aware of an application of this kind, this inequality provides a way to obtain a lower bound on the width of the threshold window for monotone Boolean functions that is sharper than the elementary lower bound of order 1/√n.

Continuum to discrete
We now begin to set the stage for the proof of Theorem 1.1. Our approach will be based on a method developed in [1], and revisited in [4], that allows one to reduce the continuum problem at hand to its discrete counterpart via a two-stage construction of the continuum process.
Fix an integer k ≥ 2 and choose η_k ∈ Ω distributed as P_{kn,p}. Let η be obtained from η_k by independently including each point of η_k with probability 1/k. Notice that η is distributed according to P_{n,p}, and that conditional on η_k, we may consider η as an element in {0,1}^{η_k} chosen according to the product measure P_{1/k}.
Recall the notation (η, η(ǫ)) for a pair of configurations in Ω distributed according to P_{n,p}, where the latter is an ǫ-perturbation of the former. The two-stage construction gives an alternative way to obtain a pair of configurations (η, η_ǫ) where, conditional on η_k, the latter is obtained by an ǫ-perturbation of the former seen as elements in {0,1}^{η_k}. Using the fact that η and η_k \ η are independent, it is for ǫ′ ≤ 1 − 1/k and ǫ = ǫ′/(1 − 1/k) straightforward to verify that (η, η(ǫ′)) and (η, η_ǫ) are equal in distribution.
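The two-stage construction itself is a one-line thinning. The sketch below (our own hypothetical helper; self-contained Poisson sampler) produces the pair (η_k, η), and the key distributional fact is that the thinned process η is again Poisson with intensity n:

```python
import math, random

def sample_poisson(lam, rng):
    # Knuth's method; adequate for moderate intensities.
    L = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod < L:
            return k
        k += 1

def two_stage(n, k, p, rng):
    # Stage 1: eta_k with intensity k * n.  Stage 2: keep each point w.p. 1/k.
    eta_k = [(rng.random(), rng.random(), 1 if rng.random() < p else 0)
             for _ in range(sample_poisson(k * n, rng))]
    eta = [pt for pt in eta_k if rng.random() < 1 / k]
    return eta_k, eta
```

Conditional on η_k, the subset η is exactly a P_{1/k}-distributed element of {0,1}^{η_k}, which is what allows the discrete machinery of Section 2 to be applied.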
The two-stage construction thus leads us to the identity

  E[f_R(η) f_R(η(ǫ′))] − E[f_R(η)]^2 = E[ E[f_R(η) f_R(η_ǫ) | η_k] − E[f_R(η) | η_k]^2 ] + Var( E[f_R(η) | η_k] ).   (3.1)

In order to prove that f_R is noise sensitive it will thus suffice to prove that each term in the right-hand side of (3.1) is small for large n. To prove that the variance term, for fixed k, tends to zero as n tends to infinity turns out to be equivalent to the original problem. To see this, let η′ and η″ be obtained independently from η_k by keeping each point with probability 1/k. Then, for ǫ′ = 1 − 1/k the joint law of (η′, η″) equals that of (η, η(ǫ′)), and hence

  Var( E[f_R(η) | η_k] ) = E[f_R(η) f_R(η(1 − 1/k))] − E[f_R(η)]^2.   (3.2)

However, we shall in Lemma 3.1 see that the expression in (3.2) tends to zero as k → ∞. The goal will then be to show that, for large k, conditional on η_k, the function f_R : {0,1}^{η_k} → {0,1} is noise sensitive in the sense of (1.1), with high probability.
In a similar manner we shall rely on the two-stage construction in order to prove that f_R has a sharp threshold at p = 1/2. The construction here will have to be slightly different, since we now want to vary the colour of certain points and not their presence. We will thus let η̄_k denote the projection of η_k to S, and instead aim to show that P[f_R(η) = 1 | η̄_k] grows from 0 to 1 in a narrow interval around p = 1/2, with high probability. A first step in both these instances is obtained in the following lemma, which has its origins in [1], although the proof we present here is taken from [4].
Lemma 3.1. For every integer k ≥ 2 and p ∈ (0,1) we have

  Var( E[f_R(η) | η_k] ) ≤ 1/k.

Proof. It all boils down to using a suitable construction for the pair (η_k, η).
As an easy corollary of the lemma above we obtain the following.
Lemma 3.2. For every rectangle R ⊆ S there exists k_0, depending only on the aspect ratio of R, such that if k ≥ k_0, then we have, for all large n, that

  P_{kn,1/2}[ P[f_R(η) = 1 | η_k] < c_1/2 ] ≤ 4/(c_1^2 k),

where c_1 is the constant in (1.2).
Proof. Since P_{n,1/2}[f_R = 1] ≥ c_1 by (1.2), Chebyshev's inequality and Lemma 3.1 imply that

  P_{kn,1/2}[ P[f_R(η) = 1 | η_k] < c_1/2 ] ≤ Var( E[f_R(η) | η_k] ) / (c_1/2)^2 ≤ 4/(c_1^2 k)

for k and n large enough.

An algorithm with low revealment
In this section, we continue to work towards a proof of Theorem 1.1. We will adopt the two-stage construction introduced in the previous section, and devise an algorithm which, conditional on the denser set of points η_k, determines the outcome of f_R(η) by querying points of η_k as to whether they are contained in the sparser set η. We then proceed to show that this algorithm has low revealment, which in the next section will allow us to deduce that f_R is noise sensitive and has a threshold at p = 1/2.

The algorithm
In this subsection we describe the algorithm. Loosely speaking, it will explore the square S until it has discovered all blue components that touch a randomly selected vertical line through R. This is achieved by querying points close to the vertical line first, and then proceeding to points that are close to already explored blue components connected to the vertical line. Since we cannot tell the Voronoi tessellation of η by just observing η_k, we will only gain information about the actual tiling locally as we go. To circumvent this difficulty, we will split S into boxes on a mesoscopic scale (see Figure 1), so that by querying all points within such a box we will correctly determine the tiling within that box with high probability, apart from close to the boundary. That is, by further dividing each box into nine sub-boxes we thus learn the tiling of η correctly within the centre box with high probability.
If the algorithm discovers a blue component that touches both left and right sides of R, then there is a horizontal blue crossing of R. If not, then there is a vertical red crossing. The reason the algorithm has low revealment is that a given point is both unlikely to be close to the randomly located vertical line, and unlikely to be connected far by a blue path.
The rest of this section will be dedicated to confirming these claims. First we give a more precise description of our algorithm, see Algorithm 4.1. Recall that Ω is the collection of finite subsets of S × {0,1}.

Figure 1: The unit square divided into smaller squares at a mesoscopic scale. When all points of η_k in a sub-square are queried, then the tiling within the centre box in a further division into nine sub-boxes is correctly determined with high probability.

Proposition 4.1. Algorithm 4.1 determines f_R.

Proof. Observe that if there exists a horizontal blue crossing of R, then it necessarily crosses every vertical line through R. Hence, it suffices to verify that given η_k the algorithm correctly determines all connected blue components of η inside R that intersect the random vertical line {x = x_0}. If the algorithm queries all points of η_k then this is trivially true. If not, then all we need to verify is that for each safe cell, i.e., a cell which is explored along with its eight surrounding neighbours, we have determined the tiling within. This is indeed the case since if no neighbouring cell has an empty subcell, then no point outside the safe cell and its eight neighbours can affect the tiling inside the safe cell.

Now that we have an algorithm that determines f_R, we need to bound its revealment. Since the algorithm only reveals the configuration inside cells of a mesoscopic lattice, we consider each such cell individually and bound the revealment of every point inside it at once. This is done in the next two subsections.

One-arm estimates
For a point in η_k to be queried by the algorithm above, one of the following three things would have to occur: either there is a subcell of some cell of the lattice which does not contain any point of η, or it is contained in a cell 'close' to the random vertical line through R, or it is in a cell located 'far' from the line, but there exists a connected blue path in η connecting the vertical line with one of the eight cells that surround that cell. In this subsection we shall bound the probability of the third of these possibilities.
Let m = ⌈n^{1/4}⌉^{−1} as before, and partition S into squares of side length m. The precise choice of m is irrelevant as long as n^{−1/2} ≪ m ≪ 1. Let C ⊆ S be a cell in this lattice, and let C′ be the square of side length 3m centered at C. We define Arm(C) as the event that there exists a blue path that connects C′ to the boundary of the square of side √m centered at C.

Proposition 4.2. There exists δ > 0 such that, for every γ > 0, we can find k_0 ≥ 1 so that, for k ≥ k_0, p ≤ 1/2, and all large n, depending on k, we have

  P_{kn,p}[ P[Arm(C) | η_k] > n^{−δ} ] ≤ n^{−γ}.

Estimates of this type have previously been obtained in [1,2,4], and the proof presented here will be similar, although different in some details. It will suffice to consider the critical case p = 1/2 due to monotonicity. As a first step, we prove a lemma that bounds the probability that a configuration contains a large cell. Let

  E := { some cell of η has radius larger than n^{−1/3} }.   (4.1)

Lemma 4.3. We have P_{n,p}[E] ≤ 100 n^{2/3} exp(−0.01 n^{1/3}).

Proof. We split the unit square S into boxes of side length (10⌈n^{1/3}⌉)^{−1}.
Notice that for E to occur it is necessary for the intersection of η with at least one of these about 100 n^{2/3} boxes to be empty. For each individual box this occurs with probability at most exp(−0.01 n · n^{−2/3}) = exp(−0.01 n^{1/3}). Via the union bound we conclude that

  P_{n,p}[E] ≤ 100 n^{2/3} exp(−0.01 n^{1/3}),

as required.
Proof of Proposition 4.2. Fix a cell C ⊆ S of side length m = ⌈n^{1/4}⌉^{−1}. For every integer j ≥ 0, denote by A_j the square annulus centered around C, with inner side-length 4^j m and outer side-length 3 · 4^j m. Let O_j be the event that there is not a blue path connecting the inner and outer boundary of A_j. That is, O_j is the event that there is a red path in A_j that disconnects any blue component touching C from the exterior of A_j. Observe that, in order for the event Arm(C) to occur, O_j cannot occur for integers j in the set

  J := { j ≥ 1 : 3 · 4^j m ≤ √m }.

Let E be the event in (4.1), and let A′_j denote the set of points within distance m/3 of A_j. We note that, on E^c, the events O_j are determined by the restriction of η to A′_j, which we shall denote η^{(j)}. That is, if g_j : Ω → {0,1} denotes the indicator of O_j, then, on E^c,

  g_j(η) = g_j(η^{(j)}) for all j ∈ J.   (4.3)

Moreover, since the sets A′_j are disjoint the configurations η^{(j)} are independent. Since O_j cannot occur for any j ∈ J in case that Arm(C) occurs, it follows that

  P[Arm(C) | η_k] ≤ P[E | η_k] + P[ ∩_{j∈J} {g_j(η^{(j)}) = 0} | η_k ].   (4.4)

Introduce the events D := { P[E | η_k] > n^{−1} } and D* := { |{ j ∈ J : P[O_j | η_k] ≥ (c_1/2)^4 }| < |J|/2 }. It remains to bound the probability that either D or D* occurs. By Markov's inequality and Lemma 4.3,

  P_{kn,1/2}[D] ≤ n P_{n,1/2}[E] ≤ 100 n^{5/3} exp(−0.01 n^{1/3}).   (4.5)

Since nm^2 ≫ 1 and the annulus A_j is the union of four rectangles with sides 3 · 4^j m and 4^j m, it follows from Lemma 3.2 and Harris' inequality that, for each j ∈ J,

  P_{kn,1/2}[ P[O_j | η_k] < (c_1/2)^4 ] ≤ 16/(c_1^2 k).   (4.6)

Here, it is important to observe that the bound is independent of the chosen annulus. Indeed, if the annulus is not entirely contained in S, then it would only be harder for blue to reach its outer boundary from within. We then observe that

  P_{kn,1/2}[D*] ≤ 2^{|J|} sup_{J′} P_{kn,1/2}[ P[O_j | η_k] < (c_1/2)^4 for all j ∈ J′ ],   (4.7)

where the supremum above is taken over all subsets of J with at least |J|/2 elements. Repeated use of (4.3) shows that, on (D ∪ D*)^c,

  P[Arm(C) | η_k] ≤ n^{−1} + (1 − (c_1/2)^4)^{|J|/2}.

Hence, combined with the estimates in (4.5)-(4.7) we conclude that

  P_{kn,1/2}[ P[Arm(C) | η_k] > n^{−1} + (1 − (c_1/2)^4)^{|J|/2} ] ≤ 100 n^{5/3} exp(−0.01 n^{1/3}) + (64/(c_1^2 k))^{|J|/2}.

Since |J| = Ω(log n) we may for every γ > 0 choose k large so that the above estimate is bounded by n^{−γ} for all large n.

Revealment of the algorithm
Now that we have the one-arm estimate, we can bound the revealment of our algorithm. We recall that a point in η_k may be queried if the m × m cell to which it belongs is either 'close' to the random vertical line through R, or 'far' from it but connected by a blue path to that line, or if the algorithm at some point discovers a subcell of some m × m cell which is empty.
Proposition 4.4. Let A denote Algorithm 4.1. There exist δ > 0 and k_0 ≥ 1 such that, for every k ≥ k_0, p ≤ 1/2, and all large n, we have

  P_{kn,p}[ δ_{1/k}(A, η_k) > n^{−δ} ] < n^{−50}.

Proof. As before we partition the unit square S into cells of side length m, and split each cell C into nine further subcells. Let G be the event that each such subcell contains a point of η, and let B := { P[G^c | η_k] > n^{−1} }. Markov's inequality then gives that, for large n,

  P_{kn,p}[B] ≤ n P[G^c] ≤ n^{−50}.

Next we fix γ = 100 and let δ > 0 and k_0 ≥ 1 be as in Proposition 4.2. Let B′ denote the event that for some m × m cell C, we have P[Arm(C) | η_k] > n^{−δ}. The union bound and Proposition 4.2 then gives that, for large n,

  P_{kn,p}[B′] ≤ m^{−2} n^{−100} ≤ n^{−99}.

For a given m × m cell C we let D_C be the event that C is within distance 2√m of the random line through R. The probability of D_C is independent of η_k, and one can obtain an upper bound of order √m, uniformly in C.
For a point of η_k to be queried there has either to exist a subcell of some m × m cell that is empty, or the point must lie in a cell C within distance 2√m of the randomly chosen vertical line through R, or Arm(C) has to occur. The revealment of A thus has to satisfy

  δ_{1/k}(A, η_k) ≤ P[G^c | η_k] + max_C ( P[D_C] + P[Arm(C) | η_k] ),

which restricted to the event (B ∪ B′)^c is at most n^{−1} + n^{−1/8} + n^{−δ}.
We may analogously to the algorithm A define an algorithm A′ which looks for a vertical red crossing of R. By symmetry it follows that, for p ≥ 1/2,

  P_{kn,p}[ δ_{1/k}(A′, η_k) > n^{−δ} ] < n^{−50}.

Noise sensitivity and the threshold window
This section is devoted to the proof of Theorem 1.1. The proof is divided in two parts. First, we prove that Voronoi percolation is noise sensitive, with a positive noise sensitivity exponent. Then we bound the width of the threshold window. Throughout the section we work with the two-stage construction of the random Voronoi configuration, as described in Section 3.
Proof of Theorem 1.1 (first part). Due to Equation (3.1) and Lemma 3.1 it will suffice, for the first part of the theorem, to show that for some γ > 0 and all large k we have

  lim_{n→∞} E[ E[f_R(η) f_R(η_{ǫ_n}) | η_k] − E[f_R(η) | η_k]^2 ] = 0,   (5.1)

where ǫ_n = n^{−γ}. Let A be the algorithm in Algorithm 4.1. The Schramm-Steif revealment Theorem (Proposition 2.1) gives that, for almost every η_k and m ≥ 1, we have

  E[f_R(η) f_R(η_{ǫ_n}) | η_k] − E[f_R(η) | η_k]^2 ≤ δ_{1/k}(A, η_k) m^2 + (1 − ǫ_n)^m.

Let δ > 0 be as in Proposition 4.4, and let B_n denote the event that δ_{1/k}(A, η_k) > n^{−δ}. Then P_{kn,1/2}[B_n] < n^{−50}, and consequently

  E[ E[f_R(η) f_R(η_{ǫ_n}) | η_k] − E[f_R(η) | η_k]^2 ] ≤ n^{−δ} m^2 + (1 − ǫ_n)^m + n^{−50}.

Hence, (5.1) holds with γ = δ/3 and n^{δ/3} ≪ m ≪ n^{δ/2}, which concludes the proof of the first part of Theorem 1.1.
We proceed with the proof of the second part.
Proof of Theorem 1.1 (second part). Given η_k ∈ Ω we shall denote by η̄_k its projection onto S. We first note that by dominated convergence we have

  d/dp P_{n,p}[f_R = 1] = E[ d/dp P[f_R(η) = 1 | η̄_k] ],   (5.2)

since the rate at which P[f_R(η) = 1 | η̄_k] may increase as p varies is bounded by the number of variables |η̄_k| affected by p. Moreover, given η̄_k, we may think of η as an element in {0,1}^{η̄_k} × {0,1}^{η̄_k}, where the first half of the coordinates determine 'colour' and the second half determine 'presence' in the final configuration. The Margulis-Russo formula then gives that

  d/dp P[f_R(η) = 1 | η̄_k] = Σ_{x∈η̄_k} P[ x is present and its colour is pivotal for f_R | η̄_k ]

almost surely. Since a blue point is better than no point, and no point is better than a red point, it follows that switching presence rather than colour of a point is less likely to affect the outcome of f_R. Consequently, the derivative is bounded from below by the sum

  Σ_{x∈η̄_k} P[ x is present and its presence is pivotal for f_R | η̄_k ].
Each term in the above expression can be rewritten as $\frac{1}{k}\,\mathbb{P}\big[\text{the presence of } x \text{ is pivotal for } f_R \,\big|\, \eta_k\big]$, where the factor $1/k$ comes from the probability of being present. Hence, (5.2) and the OSSS inequality (Proposition 2.2) together yield the lower bound (5.3) on the derivative. Fix $\epsilon > 0$ and let $I_\epsilon = I_\epsilon(n)$ denote the set of points $p \in [0,1]$ for which $\mathbb{P}_{n,p}[f_R = 1] \in (\epsilon, 1-\epsilon)$. By monotonicity $I_\epsilon$ is an interval, and for small $\epsilon$ it contains the point $1/2$. Consequently, to complete the proof it suffices to show that there exists $\gamma > 0$ such that $|I_\epsilon| \le n^{-\gamma}$ for all $\epsilon > 0$.
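The OSSS inequality of O'Donnell, Saks, Schramm and Servedio, of which Proposition 2.2 is presumably a version, bounds the variance of a function by revealments and influences: for any randomized algorithm $A$ computing $f$,

```latex
% OSSS inequality: \delta_i(A) is the probability that A reveals bit i,
% and \mathrm{Inf}_i(f) = \mathbb{P}[f(\omega) \neq f(\omega^i)], where
% \omega^i denotes \omega with its i-th coordinate resampled.
\[
  \operatorname{Var}(f) \;\le\; \sum_{i=1}^{n} \delta_i(A)\,\mathrm{Inf}_i(f).
\]
```

Since the influences reappear, up to the factor $1/k$, in the lower bound for the derivative, a low-revealment algorithm converts a non-degenerate variance into a large derivative; this is how the window bound $|I_\epsilon| \le 2k/(\epsilon^2 n^{\delta})$ below is obtained.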
Let $A$ be the algorithm in Algorithm 4.1, and let $A'$ be the analogously defined algorithm that looks for a vertical red crossing of $R$. We introduce events with which (5.3) reduces to a differential inequality for $\mathbb{P}_{n,p}[f_R = 1]$. Next we fix $k \ge 16/\epsilon^2$. By Chebyshev's inequality and Lemma 3.1 we then obtain, for all $p \in I_\epsilon$, a lower bound on the derivative, and hence $|I_\epsilon| \le 2k/(\epsilon^2 n^{\delta})$. Since $\epsilon > 0$ was arbitrary, the theorem follows with $\gamma = \delta/2$.
We can also study the behaviour of $f_R(\eta)$ for fixed values of $k$. In addition, there exists $\delta > 0$ such that, for all $p \in [0,1]$, $k > 1$ and all large $n$,
\[
  \mathbb{P}_{kn,p}\big[\delta_{1/k}(A, \eta_k) > n^{-\delta}\big] < n^{-50}.
\]
Proof. For $p = 1/2$ the first statement of the proposition is immediate from (3.2) and the first statement of Theorem 1.1. For $p \neq 1/2$ it is a trivial consequence of the second statement of Theorem 1.1.
As for the second statement, it is necessary to go through the arguments in Section 4 again, and notice that the only place where k needs to be large is in (4.6). Due to the first part of this proposition, we may modify Lemma 3.2, as pointed out in Remark 3.3, to obtain that the probability in (4.6) is small for every k > 1 and n large.

Square-root stability
In Section 5 we concluded the proof of Theorem 1.1; the remainder of this paper aims to establish Theorem 1.2. The first step in this direction is a result stating, roughly, that $f_R$ is stable with respect to perturbations that act independently and uniformly on each of the two colours and change at most order $\sqrt{n}$ of the points.
Throughout this section we shall use the notation $\xi := \{x \in S : (x,0) \in \eta\}$ and $\zeta := \{x \in S : (x,1) \in \eta\}$ for the sets of red and blue points respectively, and identify $\eta$ with the pair $(\xi, \zeta)$ when appropriate.

Proposition 6.1. Let $\eta' = (\xi', \zeta')$ and $\eta = (\xi, \zeta)$ be a pair of configurations in $\Omega$, chosen according to $\mathbb{P}_{n,1/2}$, whose joint law satisfies the following properties, stated only for the $\xi$-coordinates: (i) Given $\xi$, the distribution of $\xi \cap \xi'$ is invariant under permutations of $\xi$, and, conditioned on its size, the set $\xi' \setminus \xi$ consists of independent, uniformly distributed points in $S$. (ii) For every $\delta > 0$ there exists a constant $C$ such that, for all large $n$, the corresponding moment bound on $|\xi' \,\triangle\, \xi|$ holds, where $\xi' \,\triangle\, \xi$ is the symmetric difference between the two sets.
If, in addition, the pairs $(\zeta, \zeta')$ and $(\xi, \xi')$ are independent, then, for any rectangle $R \subseteq S$, the outcomes $f_R(\eta)$ and $f_R(\eta')$ agree with probability tending to one. The square-root scale that figures in the proposition is meaningful in the sense that $\sqrt{n}$ is an upper bound on the derivative of a monotone Boolean function on $n$ bits. Consequently, the threshold window cannot have width smaller than $1/\sqrt{n}$, and noise-sensitive monotone functions have a window that is strictly wider (cf. (2.6)). Hence a uniform perturbation that involves order $\sqrt{n}$ bits is too small to affect the outcome of the function.
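The claim that $\sqrt{n}$ bounds the derivative is the standard Cauchy–Schwarz computation: at $p = 1/2$, for a monotone function $f : \{-1,1\}^n \to \{-1,1\}$, each influence equals the corresponding degree-one Fourier coefficient, so

```latex
% Derivative bound for monotone Boolean functions at p = 1/2:
% \mathrm{Inf}_i(f) = \hat{f}(\{i\}) for monotone f, and Parseval gives
% \sum_S \hat{f}(S)^2 = \|f\|_2^2 = 1.
\[
  \frac{d}{dp}\,\mathbb{E}_p[f]\Big|_{p=1/2}
  = \sum_{i=1}^{n} \mathrm{Inf}_i(f)
  = \sum_{i=1}^{n} \hat{f}(\{i\})
  \;\le\; \sqrt{n}\,\Big(\sum_{i=1}^{n} \hat{f}(\{i\})^2\Big)^{1/2}
  \;\le\; \sqrt{n}.
\]
```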
The above heuristic has been made precise in the setting of Boolean functions by Broman, Garban and Steif [7, Lemma 6.1]. We shall prove Proposition 6.1 via a suitable two-stage construction in which a version of the result from [7] can be applied. In particular, the lemma assumes that (iv) for every $\delta > 0$ there exists a constant $C$ such that, for all large $n$ and all $i = 1, 2, \ldots, k$, the corresponding moment bound holds. Then, for every $\epsilon > 0$, there exists a constant $\tilde{C}$ such that, for all large $n$ and any function $f$, the bound (6.2) holds. Combined with (2.6), the bound in (6.2) may be expressed in terms of the sum of squared influences or the revealment of algorithms.
Proof. The case $k = 1$ is the statement of Lemma 6.1 in [7] (with the additional hypothesis that $\omega^*$ is uniform in $\{0,1\}^n$). The remaining cases follow by induction on $k$.
Fix some $k \ge 2$, assume the result is true for all $j \le k$, and fix a partition $A_1, A_2, \ldots, A_{k+1}$. Write $\tilde{A} = [n] \setminus A_{k+1}$ and $\omega = (\omega_{\tilde{A}}, \omega_{A_{k+1}})$ for the restrictions of $\omega$ to the sets $\tilde{A}$ and $A_{k+1}$. Observe that the decomposition (6.3) holds. To bound the first probability in (6.3), we apply the induction hypothesis conditioned on $\omega_{A_{k+1}}$ and use that $\omega_{A_{k+1}}$ is uniformly distributed in $\{0,1\}^{A_{k+1}}$. Analogous computations for the last term in (6.3) conclude the proof.
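Although the display (6.3) does not survive in this extraction, the decomposition in question is presumably of the following triangle-inequality type, with the hybrid configuration $(\omega'_{\tilde{A}}, \omega_{A_{k+1}})$ interpolating between $\omega$ and $\omega'$:

```latex
% If f(\omega) \neq f(\omega'), then f must already change value either
% when the coordinates in \tilde{A} are replaced or when those in
% A_{k+1} are replaced, whence the union bound:
\[
  \mathbb{P}\big[f(\omega) \neq f(\omega')\big]
  \;\le\;
  \mathbb{P}\big[f(\omega_{\tilde{A}}, \omega_{A_{k+1}})
       \neq f(\omega'_{\tilde{A}}, \omega_{A_{k+1}})\big]
  + \mathbb{P}\big[f(\omega'_{\tilde{A}}, \omega_{A_{k+1}})
       \neq f(\omega'_{\tilde{A}}, \omega'_{A_{k+1}})\big].
\]
```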
We now focus on the proof of Proposition 6.1.
Proof of Proposition 6.1. The first step of the proof is to find a suitable construction of the pairs $(\zeta, \zeta')$ and $(\xi, \xi')$. Since the perturbation acts independently on the two colours, this construction can be carried out separately for each colour. For this purpose, let $M = |\xi' \cap \xi|$ and $N = |\xi' \setminus \xi|$. Let $\xi_2$ be a Poisson point process on $S$ with intensity measure $n\lambda_S$, and let $\xi$ and $\tilde{\xi}$ be uniformly chosen subsets of $\xi_2$. Given $|\xi|$, sample the pair $(M, N)$ according to the appropriate conditional law. Next, choose uniformly a subset $\xi_A \subseteq \xi$ of size $M$ and let $\xi_B$ be a uniformly chosen subset of $\xi_2 \setminus \xi$ of size $\min\{N, |\xi_2 \setminus \xi|\}$. In addition, let $\xi_C$ be a collection of $N$ independent, uniformly chosen points of the square $S$. Now define $\xi''$ and $\xi'''$ from these pieces, construct the collection $(\zeta, \zeta'', \zeta''')$ analogously, and note that $(\zeta, \zeta''')$ and $(\xi, \xi''')$ have the correct joint distribution.
In the next step, we note that $\xi''' = \xi''$ with probability tending to 1. To see this, fix $\epsilon > 0$ and note that $N$ and $|\xi_2 \setminus \xi|$ are independent, and that the latter is Poisson with parameter $n/2$. Then, by assumption (ii), the relevant probability is small for all large $n$. The above construction thus allows us, for large $n$, to compare $f_R(\eta)$ and $f_R(\eta''')$ via $f_R(\eta'')$. Conditional on $(\zeta_2, \xi_2)$, the pairs $(\zeta, \zeta'')$ and $(\xi, \xi'')$ can be thought of as pairs of elements of $\{0,1\}^{\zeta_2}$ and $\{0,1\}^{\xi_2}$ respectively. The last step of the proof will thus be to apply Lemma 6.2 to bound the last probability above. In preparation for this, set $\delta_m := \epsilon 2^{-2m}$ and let $C_m$ be the constant in hypothesis (ii) corresponding to $\delta_m$. Let $B_1$ denote the associated bad event. Clearly $|\xi \,\triangle\, \xi''|$ is equal to $|\xi \,\triangle\, \xi'''|$ on the event where $N \le |\xi_2 \setminus \xi|$. Hence, the union bound and Markov's inequality show, for large $n$, that $B_1$ has small probability. Let also $B_2 := \{|\xi_2| \notin [n/2, 2n]\}$ and define the events $\tilde{B}_1$ and $\tilde{B}_2$ analogously for the collection $\zeta_2$. On the event $G := (B_1 \cup B_2 \cup \tilde{B}_1 \cup \tilde{B}_2)^c$, Lemma 6.2 combined with (6.2) can be applied for large $n$. Combining the resulting bound with (6.5) and (2.6), we obtain an expression which by Proposition 5.1 is no larger than $9\epsilon$ when $n$ is large. Since $\epsilon > 0$ was arbitrary, the proof is complete.

Conservative dynamics and related topics
This final section is devoted to the effect of various other perturbations of our model.
Thinning and sprinkling. We begin with a comment on non-conservative and time-dependent dynamics. We saw in Section 5 that sensitivity with respect to uniformly thinning a configuration is equivalent to the usual concept of noise sensitivity. Here we complement that observation by showing that the same is true for sprinkling.
Perturbing the colours. We shall briefly describe the results in [2], and explain how they imply that the crossing function is sensitive with respect to re-randomizing a small proportion of the colours of the points. That is, if $\eta'$ is obtained from $\eta \in \Omega$ by resampling the second coordinate of each point $(x, u) \in \eta$ independently and uniformly with probability $\epsilon > 0$, then (7.1) holds. Given $\eta \in \Omega$, let $\bar{\eta}$ denote its projection onto $S$; conditioning on $\bar{\eta}$ decomposes the quantity in (7.1) into a variance term and a quenched noise term. In [2], the authors show that both terms vanish as $n \to \infty$, and hence prove (7.1). That the variance term tends to zero shows that observing the tiling, but not the colouring, of a Voronoi configuration typically gives very little information about whether the colouring will produce a horizontal blue crossing, and confirms a conjecture of Benjamini, Kalai and Schramm [5]. The second term tending to zero is essentially a statement of noise sensitivity of the crossing function in a quenched sense, from which noise sensitivity in the sense of (1.3) can be deduced. However, we emphasize that the techniques used there are more restrictive than the techniques used here, as they are based on a colour-switching trick. This motivates the alternative proof presented here, which applies in a wide range of settings.
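None of the following code appears in the paper; it is a toy numerical illustration, under arbitrary choices of intensity, grid resolution and noise level, of the colour-resampling perturbation in (7.1). It rasterizes the Voronoi colouring by nearest-neighbour lookup (the discretization idea mentioned in the abstract) and compares the crossing indicator before and after resampling a fraction $\epsilon$ of the colours. All function names and parameters are ours, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import label

rng = np.random.default_rng(0)

def voronoi_crossing(points, colours, grid=100):
    """Rasterize the Voronoi colouring of the unit square on a
    grid x grid pixel lattice and test for a left-to-right crossing
    of blue (colour 1) cells."""
    xs = (np.arange(grid) + 0.5) / grid
    gx, gy = np.meshgrid(xs, xs)
    pixels = np.column_stack([gx.ravel(), gy.ravel()])
    # Each pixel takes the colour of its nearest point: this is
    # exactly the Voronoi colouring, sampled on the grid.
    _, nearest = cKDTree(points).query(pixels)
    blue = colours[nearest].reshape(grid, grid).astype(bool)
    lab, _ = label(blue)  # 4-connected components of blue pixels
    # A crossing exists iff some blue component meets both the
    # leftmost and the rightmost pixel column (label 0 = background).
    common = np.intersect1d(lab[:, 0], lab[:, -1])
    return bool(common.max(initial=0) > 0)

def colour_noise_experiment(n=200, eps=0.2, trials=30):
    """Estimate P[f_R = 1] and P[f_R(eta) = f_R(eta') = 1] when each
    colour is independently resampled with probability eps."""
    hits = agree = 0
    for _ in range(trials):
        m = max(1, rng.poisson(n))     # avoid the empty configuration
        pts = rng.random((m, 2))
        col = rng.integers(0, 2, m)
        resample = rng.random(m) < eps
        col2 = np.where(resample, rng.integers(0, 2, m), col)
        a = voronoi_crossing(pts, col)
        b = voronoi_crossing(pts, col2)
        hits += a
        agree += a and b
    return hits / trials, agree / trials
```

Noise sensitivity predicts that, for any fixed $\epsilon > 0$, the estimate of $\mathbb{P}[f_R(\eta) = f_R(\eta') = 1]$ approaches $\mathbb{P}[f_R = 1]^2$ as the intensity $n$ grows, i.e. the two indicators decorrelate.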
Perturbing the positions. We now turn to the proof of Theorem 1.2. The proof will be based on Proposition 6.1, which emphasizes a close relation to the exclusion sensitivity studied in [7].
Proof of Theorem 1.2. We shall show that the crossing function f R is sensitive with respect to re-randomizing the positions of a small proportion of the points. This type of perturbation is conservative in the sense that the number of points of each colour is kept constant. Our goal will be to construct the process in a suitable manner, and then apply Proposition 6.1.