A lower bound for disconnection by random interlacements

We consider the vacant set of random interlacements on Z^d, d ≥ 3, in the percolative regime. Motivated by the large deviation principles obtained in our recent work arXiv:1304.7477, we investigate the asymptotic behavior of the probability that a large body gets disconnected from infinity by the random interlacements. We derive an asymptotic lower bound, which brings into play tilted interlacements, and relates the problem to some of the large deviations of the occupation-time profile considered in arXiv:1304.7477.


Introduction
Random interlacements constitute a percolation model with long-range dependence, and the percolative properties of their vacant set play an important role in the investigation of several questions of disconnection or fragmentation created by random walks, see [5], [19], [23]. Here, we consider random interlacements on Z d , d ≥ 3. It is by now well-known that as one increases the level u of the interlacements, the percolative properties of the vacant set undergo a phase transition, and the model evolves from a percolative phase to a non-percolative phase, see [20] and [17]. In the present work, we are mainly interested in the percolative phase of the model, and we derive an asymptotic lower bound on the probability that a macroscopic body has no connection to infinity in the vacant set. Strikingly, this lower bound corresponds to certain large deviations of the occupation-time profile of random interlacements investigated in our previous work [13], where we analyzed the exponential decay of the probability that a macroscopic body gets insulated by high values of the (regularized) occupation-time profile.
We now describe the model and our results in a more precise fashion. We refer to Section 1 for precise definitions. We consider continuous-time random interlacements on Z d , d ≥ 3. We denote by P u the canonical law of random interlacements at level u > 0, and by I u and V u = Z d \I u the corresponding interlacement set and vacant set. It is known that there is a critical value u * * ∈ (0, ∞), which can be characterized as the infimum of the levels u > 0 for which the probability that the vacant cluster at the origin reaches distance N from the origin has a stretched exponential decay in N , see [18]. It is an important open question whether u * * actually coincides with the critical level u * for the percolation of the vacant set (but it is a simple fact that u * ≤ u * * ).
In this work we are primarily interested in the percolative regime of the vacant set, but we actually only assume that 0 < u ≤ u**, because our lower bound on disconnection provides information in this possibly wider range of levels.
We consider a compact subset K of R^d and its discrete blow-up K_N, see (0.1), where NK denotes the homothetic of ratio N of the set K, and d_∞(z, NK) = inf_{y ∈ NK} |z − y|_∞ stands for the sup-norm distance of z to NK. Of central interest for us is the event, denoted by A_N in (0.2), stating that K_N is not connected to infinity in V^u. The main result of this article is the following asymptotic lower bound.
Theorem 0.1. For u ∈ (0, u**] one has

(0.3)   liminf_N (1/N^{d−2}) log P^u[A_N] ≥ −(1/d) (√u** − √u)^2 cap_{R^d}(K),

where cap_{R^d}(K) stands for the Brownian capacity of K.
In essence, the lower bound (0.3) replicates the asymptotic behavior of the probability that the regularized occupation-time profile of random interlacements insulates K by values exceeding u**, see Theorems 6.2 and 6.4, as well as Remarks 6.5 2) and 6.5 5) of [13]. It is a remarkable feature that such large deviations of the occupation-time profile induce a "thickening" of the interlacement surrounding K_N, rather than a mere change of the clocks governing the time spent by the trajectories defining the interlacement. This thickening is potent enough to typically disconnect K_N from infinity. We refer to Remark 2.5 for more on this topic. It is of course an important question whether there is a matching upper bound to (0.3) when K is a smooth compact, and whether the large deviations of the occupation-time profile capture the main mechanism through which I^u disconnects a macroscopic body from infinity.
Incidentally, the tilted interlacements, which we heavily use in this work, come up as a kind of slowly space-modulated random interlacements. Possibly, they offer, in a discrete set-up, a microscopic model for the type of "Swiss cheese" picture advocated in [3], when studying the moderate deviations of the volume of the Wiener sausage (however the relevant modulating functions in [3] and in the present work correspond to distinct variational problems and are different).
One may also compare Theorem 0.1 to corresponding results for supercritical Bernoulli percolation. Unlike what happens in the present set-up, disconnecting a large macroscopic body in the percolative phase (when K is a smooth compact) would involve an exponential cost proportional to N d−1 , in the spirit of the study of the existence of a large finite cluster at the origin, see p. 216 of [10], or Theorem 2.5, p. 16 of [4].
Further, it is interesting to note that when u → 0, the right-hand side of (0.3) has a finite limit. One may wonder about the relation of this limit to what happens in our original problem when one replaces I u by a single random walk trajectory (starting for instance at the origin), that is, when we consider the probability that K N is disconnected from infinity by the trajectory of one single random walk starting at the origin. We refer to Remark 5.1 2) for more on this question.
We briefly comment on the proofs. The main strategy is to use a change of probability and an entropy bound. We construct, through fine-tuned Radon–Nikodym derivatives, new measures P_N corresponding to "tilted random interlacements", which have the crucial property that under P_N the disconnection probability tends to 1 as N goes to infinity:

(0.4)   P_N[A_N] → 1, as N → ∞.

Then, by a classical inequality (see (1.61)), one has a lower bound for the disconnection probability in terms of the relative entropy:

(0.5)   P^u[A_N] ≥ P_N[A_N] exp{−(H(P_N | P^u) + e^{−1})/P_N[A_N]}.

We relate the relative entropy of P_N with respect to P^u to the Brownian capacity of K, and show in Propositions 2.3 and 2.4 that

(0.6)   lim (1/N^{d−2}) H(P_N | P^u) ≤ (1/d) (√u** − √u)^2 cap_{R^d}(K)

(where lim refers to certain successive limiting procedures involving N first, and then various auxiliary parameters entering the construction of P_N).
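The classical inequality invoked here (see (1.61)) states that for a measure P̃ absolutely continuous with respect to P and an event A with P̃[A] > 0, one has P[A] ≥ P̃[A] exp{−(H(P̃ | P) + 1/e)/P̃[A]}. As a sanity check, this can be tested numerically on finite sample spaces; a minimal sketch (the distributions below are made up for illustration):

```python
import math
import random

def rel_entropy(p, q):
    # H(p | q) = sum_i p_i log(p_i / q_i), with the convention 0 log 0 = 0
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def entropy_bound(p, q, A):
    # right-hand side of the inequality P[A] >= P~[A] * exp(-(H(P~|P) + 1/e)/P~[A])
    pA = sum(p[i] for i in A)
    return pA * math.exp(-(rel_entropy(p, q) + math.exp(-1)) / pA)

random.seed(0)
violations = 0
for _ in range(1000):
    n = random.randint(2, 6)
    q = [random.random() + 1e-3 for _ in range(n)]
    p = [random.random() + 1e-3 for _ in range(n)]
    q = [x / sum(q) for x in q]          # the reference measure P
    p = [x / sum(p) for x in p]          # the tilted measure P~
    A = [i for i in range(n) if random.random() < 0.5] or [0]
    if sum(q[i] for i in A) < entropy_bound(p, q, A) - 1e-12:
        violations += 1
assert violations == 0
```

The loop never finds a violation, in line with the fact that the entropy inequality holds for arbitrary tilts of arbitrary reference measures.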
The measure P_N governing the tilted interlacements is constructed in Section 2. Intuitively, it forces a "local level" of interlacements corresponding to u** + ε, in a "fence" surrounding K_N. This creates a strongly non-percolative region surrounding K_N and leads to (0.4). Of course, a substantial part of the work is to make sense of the above heuristics. This goes through a local comparison, at a mesoscopic scale, between the occupied set of tilted interlacements and standard interlacements at a level exceeding u**.
In particular, we show in Proposition 4.1 that for all mesoscopic boxes B_1 of size N^{r_1} (with r_1 small) and center in Γ_N, a "fence" around K_N, one has a coupling Q̄ between I_1, distributed as I^{u**+ε/8} ∩ B_1, and Ī, distributed as the intersection of the tilted interlacement set with B_1, so that I_1 is contained in Ī with probability tending to 1, see (0.7). The proof of this key stochastic domination bound relies on two main ingredients. On the one hand, it involves a comparison of equilibrium measures, see Proposition 3.4, which itself relies on a comparison of capacities on a slightly larger mesoscopic scale, see Proposition 3.1. On the other hand, it involves a domination of I^{u**+ε/8} ∩ B_1 by the trace on B_1 of a suitable Poisson point process of excursions of the simple random walk starting on the boundary of B_1 up to their exit from a larger box B_2. For this last step we can rely on results of [1].
We will now explain how this article is organized. In Section 1 we introduce notation and make a brief review of results concerning continuous-time random walk, Green function, continuous-time random interlacements, as well as other useful facts and tools. Section 2 is devoted to the construction of the probability measure governing the tilted random interlacements. We also compute and obtain asymptotic estimates on the relative entropy, see Propositions 2.3 and 2.4. In Section 3 we derive a comparison of capacities in Proposition 3.1, and, subsequently, of equilibrium measures in Proposition 3.4. The latter proposition plays a crucial role in the construction of the coupling in the next section. In Section 4 we prove (0.7) in Proposition 4.1, and the crucial statement (0.4) in Theorem 4.3. In the short Section 5 we assemble the various pieces and prove the main theorem.
Finally, we explain the convention we use concerning constants. We denote by c, c′, c̄, c̃, . . . positive constants with values changing from place to place, and by c_0, c_1, . . . positive constants which are fixed and refer to their value at first appearance. Throughout the article the constants depend on the dimension d. Dependence on additional parameters is stated explicitly in the notation.

Some useful facts
Throughout the article we assume d ≥ 3. In this section we introduce further notation and useful facts, in particular concerning the continuous-time random walk on Z^d and its potential theory. Lemma 1.1 concerns the occupation times of balls and will be used in Section 3. Moreover, we introduce another continuous-time reversible Markov chain on Z^d, which will play a crucial role in the upcoming sections, and we state some useful results regarding its potential theory. We also recall the definition and basic facts concerning continuous-time random interlacements. We end this section by stating some results about relative entropy and Poisson point processes.

We start with some notation. We let N = {0, 1, . . .} stand for the set of natural numbers. We write |·| and |·|_∞ for the Euclidean and ℓ^∞-norms on R^d. We denote by B(x, r) = {y ∈ Z^d; |x − y| ≤ r} the closed Euclidean ball of radius r ≥ 0 intersected with Z^d, and by B_∞(x, r) = {y ∈ Z^d; |x − y|_∞ ≤ r} the closed ℓ^∞-ball of radius r intersected with Z^d. When U is a subset of Z^d, we write |U| for the cardinality of U, and U ⊂⊂ Z^d means that U is a finite subset of Z^d. We denote by ∂U (resp. ∂_i U) the boundary (resp. internal boundary) of U, and by Ū = U ∪ ∂U its "closure", see (1.1). Given a subset A of R^d and ρ > 0, we write A^ρ for the closed ρ-neighborhood of A, and B_{R^d}(z, r) for the closed Euclidean ball of radius r in R^d centered at z. We also introduce the N-discrete blow-up of U, see (1.2), where NU = {Nz; z ∈ U} denotes the homothetic of U.
We now collect some notation concerning connectivity properties. We call π : {1, . . . , n} → Z^d, with n ≥ 1, a nearest-neighbor path when |π(i) − π(i − 1)| = 1 for 1 < i ≤ n. Given subsets K, L, U of Z^d, we say that K and L are connected by U, and write K ↔ L in U, if there exists a finite nearest-neighbor path π in Z^d such that π(1) belongs to K, π(n) belongs to L, and π(k) belongs to U for all k in {1, . . . , n}. Otherwise, we say that K and L are not connected by U. Similarly, for K, U ⊆ Z^d, we say that K is connected to infinity by U, and write K ↔ ∞ in U, if K ↔ B(0, N)^c in U for all N. Otherwise, we say that K is not connected to infinity by U.
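For finite configurations, the connectivity events just defined can be decided by breadth-first search. A small illustrative sketch (the function name and the test configuration are made up), deciding whether K reaches the ℓ^∞-sphere of radius R inside U, a finite stand-in for connection to infinity:

```python
from collections import deque

def connected_to_radius(K, U, R):
    # BFS from K inside U: returns True if some nearest-neighbor path in U
    # joins K to a site of l^infinity-norm at least R.
    start = [x for x in K if x in U]
    seen, queue = set(start), deque(start)
    while queue:
        x = queue.popleft()
        if max(abs(c) for c in x) >= R:
            return True
        for i in range(len(x)):
            for s in (1, -1):
                y = tuple(c + (s if j == i else 0) for j, c in enumerate(x))
                if y in U and y not in seen:
                    seen.add(y)
                    queue.append(y)
    return False

# Example in Z^3: the "vacant" set is a box minus an occupied shell at radius 2,
# which disconnects the origin from the sphere of radius 4.
box = {(a, b, c) for a in range(-4, 5) for b in range(-4, 5) for c in range(-4, 5)}
shell = {x for x in box if max(abs(c) for c in x) == 2}
vacant = box - shell
```

Here the full box connects the origin to radius 4, while removing the shell produces exactly the kind of disconnection event A_N discussed in the Introduction.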
We now turn to the definition of some path spaces and of the continuous-time simple random walk. We consider W_+ and W, the spaces of infinite (resp. doubly-infinite) Z^d × (0, ∞)-valued sequences such that the first coordinates form an infinite (resp. doubly-infinite) nearest-neighbor path in Z^d, spending finite time in any finite subset of Z^d, and the second coordinates have an infinite sum (resp. infinite "forward" and "backward" sums). The second coordinate describes the duration attached to each step of the first coordinate. We endow W_+ and W with the respective σ-algebras generated by the coordinate maps. We denote by P_x the law on W_+ under which Z_n, n ≥ 0, has the law of the simple random walk on Z^d starting from x, and ζ_n, n ≥ 0, are i.i.d. exponential variables with parameter 1, independent from Z_n, n ≥ 0. We denote by E_x the corresponding expectation. Moreover, if α is a measure on Z^d, we denote by P_α and E_α the measure Σ_{x∈Z^d} α(x) P_x (not necessarily a probability measure) and its corresponding "expectation" (i.e. the integral with respect to P_α).
We attach to w ∈ W_+ a continuous-time process (X_t)_{t≥0}, and call it the random walk on Z^d with constant jump rate 1 under P_x, through the relation, see (1.3),

X_t = Z_n, for ζ_1 + · · · + ζ_n ≤ t < ζ_1 + · · · + ζ_{n+1}, n ≥ 0

(for n = 0, the left sum is understood as 0). We also introduce the canonical filtration (F_t)_{t≥0}. Given U ⊆ Z^d and w ∈ W_+, we write H_U(w) = inf{t ≥ 0; X_t(w) ∈ U} and T_U(w) = inf{t ≥ 0; X_t(w) ∉ U} for the entrance time in U and the exit time from U. Moreover, we write H̃_U = inf{s ≥ ζ_1; X_s ∈ U} for the hitting time of U.
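Concretely, a trajectory of the walk just described can be simulated from its discrete skeleton (Z_n)_{n≥0} and i.i.d. exponential holding times (ζ_n)_{n≥0}; a sketch under these conventions (the helper name is made up):

```python
import random

def simulate_walk(x0, T, d=3):
    # rate-1 continuous-time simple random walk on Z^d up to time T:
    # X_t = Z_n for zeta_1 + ... + zeta_n <= t < zeta_1 + ... + zeta_{n+1},
    # returned as the list of (jump time, position) pairs.
    t, pos = 0.0, tuple(x0)
    path = [(0.0, pos)]
    while True:
        t += random.expovariate(1.0)        # holding time with parameter 1
        if t > T:
            return path
        i = random.randrange(d)             # uniform nearest-neighbor step
        s = random.choice((1, -1))
        pos = tuple(c + (s if j == i else 0) for j, c in enumerate(pos))
        path.append((t, pos))
```

Each recorded position differs from the previous one by a single unit step, and all recorded jump times lie below the horizon T, mirroring the definition of (X_t)_{t≥0} above.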
For U ⊂ Z d , we write Γ(U ) for the space of all right-continuous, piecewise constant functions from [0, ∞) to U , with finitely many jumps on any compact interval. We will also denote by (X t ) t≥0 the canonical coordinate process on Γ(U ), and whenever an ambiguity arises, we will specify on which space we are working.
We denote by g(·, ·) and g_U(·, ·) the Green function of the walk, and the Green function of the walk killed upon leaving U:

g(x, y) = E_x[∫_0^∞ 1{X_s = y} ds],  g_U(x, y) = E_x[∫_0^{T_U} 1{X_s = y} ds],  for x, y ∈ Z^d.

It is known that g is translation invariant. Moreover, both g and g_U are symmetric and finite. When x tends to infinity, one knows that (see, e.g., p. 153, Proposition 6.3.1 of [12]) g(x) := g(0, x) ∼ d G(x), see (1.7), where G(x) = Γ(d/2 − 1)/(2π^{d/2}) |x|^{2−d} is the Green function with a pole at the origin, attached to Brownian motion, see (1.8). We also have the estimate (1.10) on the killed Green function (see p. 157, Proposition 6.3.5 of [12]), for x ∈ B(0, N). We further recall the definitions of equilibrium measure and capacity, and refer to Section 2, Chapter 2 of [11] for more details. Given M ⊂⊂ Z^d, we write e_M for the equilibrium measure of M:

(1.11)   e_M(x) = P_x[H̃_M = ∞] 1{x ∈ M},  for x ∈ Z^d,

and its total mass cap(M) = Σ_{x ∈ M} e_M(x) is the capacity of M, see (1.12). There is also an equivalent definition of the capacity through the Dirichlet form: cap(M) = inf{E_{Z^d}(f, f); f ≥ 1 on M, f finitely supported}, see (1.13), where

(1.14)   E_{Z^d}(f, f) = (1/2) Σ_{x ∈ Z^d} (1/2d) Σ_{|e|=1} (f(x + e) − f(x))^2

is the discrete Dirichlet form for simple random walk.
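These notions can be illustrated numerically: the escape probability P_x[H̃_M = ∞] defining e_M can be approximated by killing the walk at the boundary of a large finite box. A rough sketch in d = 3 (the helper name is made up; the absorbing box is only a stand-in for escape to infinity, so the result carries a truncation error):

```python
import numpy as np

def capacity_box(M, R=8, iters=1500):
    # Approximate cap(M) = sum_{x in M} e_M(x) for a finite M in Z^3, where
    # e_M(x) = P_x[no return to M] is computed from h(y) ~ P_y[H_M < exit of box].
    n = 2 * R + 1
    h = np.zeros((n, n, n))
    idx = [tuple(c + R for c in x) for x in M]
    for p in idx:
        h[p] = 1.0
    for _ in range(iters):                  # Jacobi iteration: h is harmonic off M
        avg = sum(np.roll(h, s, axis=a) for a in range(3) for s in (1, -1)) / 6.0
        avg[0, :, :] = avg[-1, :, :] = 0.0  # absorbed at the boundary ("infinity")
        avg[:, 0, :] = avg[:, -1, :] = 0.0
        avg[:, :, 0] = avg[:, :, -1] = 0.0
        for p in idx:
            avg[p] = 1.0
        h = avg
    cap = 0.0
    for x in M:                             # e_M(x) = (1/2d) sum_{y ~ x} (1 - h(y))
        for a in range(3):
            for s in (1, -1):
                y = tuple(c + R + (s if j == a else 0) for j, c in enumerate(x))
                cap += (1.0 - h[y]) / 6.0
    return cap
```

For M = {0} this returns a value close to 1/g(0, 0) ≈ 0.66 (the capacity of a single point), up to the box-truncation error, and it respects the monotonicity and subadditivity of the capacity.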
Moreover, the probability of entering M can be expressed as

P_x[H_M < ∞] = Σ_{y ∈ M} g(x, y) e_M(y),  for x ∈ Z^d,

and in particular, when x ∈ M, we have Σ_{y ∈ M} g(x, y) e_M(y) = 1. We now introduce some notation for (killed) entrance measures, see (1.17).
The equilibrium measure also satisfies the sweeping identity (for instance, seen as a consequence of (1.46) in [20]): for M ⊆ M′ ⊂⊂ Z^d and y ∈ M, using the notation from above (1.3),

e_M(y) = P_{e_{M′}}[H_M < ∞, X_{H_M} = y].

The next lemma will be useful in Section 3, see Proposition 3.1. It provides an asymptotic estimate on the expected time a random walk starting at the boundary of a ball of large radius spends in this ball. We recall the convention on constants stated at the end of the Introduction.
Proof. For simplicity, we fix x in this proof and write B = B(0, N). We define r_N as in (1.20), and split B into two parts. In the inner part B_I, we use a crude upper bound for g(x, ·), derived from (1.7); as a result, we obtain the corresponding contribution to (1.19). Let x̂ = (N/|x|) x denote the projection of x onto the Euclidean sphere of radius N centered at 0; it is straightforward to see that x̂ lies within bounded distance of x. By the asymptotic approximation of the discrete Green function (see (1.7) and (1.8)), we obtain (1.24) with a Riemann sum approximation argument. Thanks to the scaling property and rotation invariance of Brownian motion, the limiting expression can be evaluated, and by the definition of r_N in (1.20), we obtain (1.19) as desired.
We now introduce a positive martingale, which plays an important role in the definition of the tilted interlacements in the next section. We will show in the lemma below that this martingale is uniformly integrable, and we will use its limiting value as a probability density.
Given a real-valued function h on Z^d, we denote its discrete Laplacian by

Δ_dis h(x) = (1/2d) Σ_{|e|=1} (h(x + e) − h(x)),  for x ∈ Z^d.

We consider a positive function f on Z^d, which is equal to 1 outside a finite set, and we write, see (1.29), V = Δ_dis f / f. We also introduce the stochastic process, see (1.30),

M_T = (f(X_T)/f(X_0)) exp{−∫_0^T V(X_s) ds},  T ≥ 0,

and define for all x ∈ Z^d, T > 0, the positive measure P_{x,T} on W_+ with density M_T with respect to P_x. The next lemma plays an important role in the construction of the tilted interlacements; it states that (M_T)_{T≥0} is a martingale under P_x, see (1.31), together with the claims (1.32) and (1.33).

Proof. The first claim (1.31) is classical. It follows for instance from Lemma 3.2, p. 174 in Chapter 4 of [8]. Note that E_x[M_0] = 1, so P_{x,T} is a probability measure for each T. Using the Markov property of X under P_x and (1.31), it readily follows that (X_t)_{0≤t≤T} under P_{x,T} is a Markov chain. By Theorem 2.5, p. 61 of [6], its semigroup (acting on the Banach space of functions on Z^d tending to zero at infinity) has a generator given by the bounded operator in (1.34). We introduce the law Q_x on Γ(Z^d) of the jump process starting from x, corresponding to the generator L defined in (1.34). Since f = 1 outside some finite set, by (1.34) this process jumps as a simple random walk outside the (discrete) closure of this finite set. As a result, the canonical jump process attached to Q_x is transient. In addition, up to time T, it has the same law as (X_t)_{0≤t≤T} under P_{x,T}.
Therefore, the claim (1.32) will follow once we show the corresponding identity under Q_x. The first term after the second equality of (1.36) is zero, since g(X_t) − g(X_0) − ∫_0^t Lg(X_s) ds is a martingale under Q_x (see Proposition 1.7, p. 162 of [8]). As for the second term, we rewrite it with the help of (1.34). Hence, with a straightforward calculation, we see that the resulting function ψ is finitely supported.
Therefore, due to the transience of the canonical process under Q_x, the relevant expectation is finite, and (1.32) follows. The last claim (1.33) follows by uniform integrability: indeed, the martingale converges P_x-a.s. and in L^1(P_x) towards a limit M_∞. We thus define for all x in Z^d the positive measure P̃_x on W_+ with density M_∞ with respect to P_x, see (1.44). The following corollary is a consequence of Lemma 1.2 and its proof.
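The normalization E_x[M_T] = 1 behind the martingale property (1.31) reflects a general Feynman–Kac identity: for a finite-state generator L and V = Lf/f, one has (L − V·)f = 0, so the semigroup e^{T(L − diag(V))} fixes f. A small numerical sketch on a cycle (an illustrative stand-in for Z^d, with an arbitrary positive f):

```python
import numpy as np
from scipy.linalg import expm

n = 7                                        # rate-1 walk on the cycle Z/nZ
L = np.zeros((n, n))
for i in range(n):
    L[i, (i - 1) % n] = L[i, (i + 1) % n] = 0.5
    L[i, i] = -1.0

f = 1.0 + np.arange(n) % 3                   # a positive function on the state space
V = (L @ f) / f                              # potential V = Lf / f

# Feynman-Kac: E_x[f(X_T) exp(-int_0^T V(X_s) ds)] = (e^{T(L - diag(V))} f)(x),
# so E_x[M_T] = (e^{T(L - diag(V))} f)(x) / f(x) = 1, since (L - diag(V)) f = 0.
T = 2.5
EM = (expm(T * (L - np.diag(V))) @ f) / f
assert np.allclose(EM, np.ones(n))
```

The identity holds for every starting point x and horizon T simultaneously, which is the content of the martingale property.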
Its semigroup in L^2(λ̃) has a bounded generator, see (1.47). Similarly to the results in potential theory for the continuous-time simple random walk earlier in this section, we can also define for (X_t)_{t≥0} under {P̃_x}_{x∈Z^d} the corresponding notions such as (killed) Green function, equilibrium measure, and capacity. We refer to Sections 2.1 and 2.2 of Chapter 2 and Section 4.2 of Chapter 4 of [9] for more details. We denote the corresponding objects with a tilde, and refer to them as tilted objects.
Specifically, we write g̃ and g̃_U for the tilted Green function and the tilted Green function killed outside U ⊆ Z^d, defined in analogy with the corresponding objects for the simple random walk. One knows that g̃ and g̃_U are symmetric and finite. Given M ⊂⊂ Z^d, the tilted equilibrium measure ẽ_M and the tilted capacity of M are defined in analogy with (1.11) and (1.12), and h̃_A(x, ·) and h̃_{A,B}(x, ·) are the respective tilted entrance measure in A and tilted entrance measure in A relative to B, when starting at x.
We now turn to continuous-time random interlacements. We refer to [22] for more details. We define W * = W / ∼, where w ∼ w ′ is defined as w(·) = w ′ (· + k) for some k ∈ Z, for w, w ′ ∈ W . We also define the canonical map as π * : W → W * . We write W * M for the subset of W * of trajectories modulo time-shift that intersect M ⊂⊂ Z d . For w * ∈ W * M , we write w * M,+ for the unique element of W + , which follows w * step by step from the first time it enters M .
The continuous-time random interlacements can be seen as a Poisson point process on the space W*, with intensity measure uν, where u > 0 and ν is a σ-finite measure on W* such that its restriction to W*_M (denoted by ν_M) is equal to π* ∘ Q_M, where Q_M is a finite measure on W such that (see (1.7) in [22]), if (X_t)_{t∈R} is the continuous-time process attached to w ∈ W, then Q_M[X_0 = x] = e_M(x), for x ∈ Z^d, see (1.54), and when e_M(x) > 0, under Q_M conditioned on X_0 = x, (X_t)_{t≥0} and the right-continuous regularization of (X_{−t})_{t>0} are independent, with the same respective distributions as (X_t)_{t≥0} under P_x, and as X after its first jump under P_x conditioned on H̃_M = ∞, see (1.55). We define the space Ω of point measures on W*, see (1.56). If F : W* → R and ω = Σ_i δ_{w*_i}, we write ⟨ω, F⟩ = Σ_i F(w*_i) for the integral of F with respect to ω. Given M ⊂⊂ Z^d and ω = Σ_{i≥0} δ_{w*_i} in Ω, we let μ_M(ω) stand for the point measure on W_+

μ_M(ω) = Σ_{i≥0} 1{w*_i ∈ W*_M} δ_{(w*_i)_{M,+}},

which collects the cloud of onward trajectories after the first entrance in M (see below (1.53) for notation).
We write P^u for the probability measure governing random interlacements at level u, that is, the canonical law on Ω of the Poisson point process on W* with intensity measure uν, and E^u for its expectation. Given ω = Σ_i δ_{w*_i}, we define the interlacement set and the vacant set at level u respectively as the random subsets of Z^d

I^u(ω) = ∪_{i≥0} Range(w*_i)  and  V^u(ω) = Z^d \ I^u(ω),

where for w* in W*, Range(w*) stands for the set of points in Z^d visited by any w in W with π*(w) = w*. The above random sets have the same law as I^u and V^u in [20].
The connectivity function of the vacant set of random interlacements is known to have a stretched-exponential decay when the level exceeds a certain critical value (see Theorem 4.1 of [21], or Theorem 0.1 of [18], and Theorem 3.1 of [14] for recent developments). Namely, there exists u** ∈ (0, ∞), which, for our purpose in this article, can be characterized as the smallest positive number such that for all u > u**,

P^u[0 ↔ ∂B(0, N) in V^u] ≤ c(u) e^{−N^{c′(u)}},  for all N ≥ 0.
We also wish to recall a classical result on relative entropy, which will be helpful in Section 2. For P̃ absolutely continuous with respect to P, the relative entropy of P̃ with respect to P is defined as

(1.60)   H(P̃ | P) = Ẽ[log(dP̃/dP)] ∈ [0, +∞],

where Ẽ denotes the expectation under P̃. For an event A with positive P̃-probability, we have the following inequality (see p. 76 of [7]):

(1.61)   P[A] ≥ P̃[A] exp{−(H(P̃ | P) + 1/e)/P̃[A]}.

We end this section by recalling one property of the Poisson point process on general spaces, which rephrases Lemma 1.4 of [13]. Let μ be a Poisson point process on E with finite intensity measure η (i.e. η(E) < ∞), and let Φ : E → R be a measurable function. Then one has

E[exp{⟨μ, Φ⟩}] = exp{∫_E (e^Φ − 1) dη}

(this is an identity in (0, +∞]).
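The last identity can be checked directly on a finite space E, where ⟨μ, Φ⟩ = Σ_i N_i Φ(i) with independent Poisson counts N_i, so the left-hand side factorizes into Poisson moment generating functions; a small sketch (the intensity and Φ below are arbitrary illustrative values):

```python
import math

eta = [0.7, 1.3, 0.2]        # finite intensity measure on E = {0, 1, 2}
phi = [0.5, -1.0, 2.0]       # a function Phi on E

def poisson_mgf(lam, t, terms=60):
    # E[e^{tN}] for N ~ Poisson(lam), via the truncated series
    # sum_k e^{-lam} (lam e^t)^k / k!
    x = lam * math.exp(t)
    return math.exp(-lam) * sum(x ** k / math.factorial(k) for k in range(terms))

# <mu, Phi> = sum_i N_i Phi(i) with independent N_i ~ Poisson(eta_i), so the
# exponential moment factorizes over the points of E.
lhs = math.prod(poisson_mgf(l, t) for l, t in zip(eta, phi))
rhs = math.exp(sum((math.exp(t) - 1.0) * l for l, t in zip(eta, phi)))
assert abs(lhs - rhs) < 1e-9 * rhs
```

Each factor of the product is exp{η_i (e^{Φ(i)} − 1)}, which is exactly the claimed identity restricted to an atom of E.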

The tilted interlacements
In this section, we define a new probability measure P_N, which is absolutely continuous with respect to P^u, see Proposition 2.1. It governs a Poisson point process on W*, which corresponds to the "tilted random interlacements". Intuitively, these tilted interlacements describe a kind of slowly space-modulated random interlacements. The motivation for the exponential tilt entering the definition of P_N actually stems from the analysis of certain large deviations of the occupation-time profile of random interlacements considered in [13], see Remark 2.5 below. In Proposition 2.3 we compute the relative entropy of P_N with respect to P^u, and we then relate this result to the capacity of K after a suitable limiting procedure, see Proposition 2.4.
We begin with the construction of the new measure P N , which will correspond to an exponential tilt of P u , see (2.7).
We recall that K is a compact subset of R^d, as above (0.1). We consider δ, ε in (0, 1), and let U ⊆ Ũ be the open Euclidean balls centered at 0 with respective radii r_U and r_Ũ, where r_U > 0 and r_Ũ = r_U + 4. We assume that r_U is sufficiently large so that K^{2δ} ⊂ U ⊂ Ũ ⊂ R^d (recall that K^{2δ} stands for the closed 2δ-neighborhood of K, see below (1.1)). By the end of this section we will eventually let r_U tend to infinity and then let δ tend to 0. We denote by W_z the Wiener measure starting from z, and by H_F, for F a closed subset of R^d, the entrance time of the canonical Brownian motion in F. We write, see (2.1),

h(z) = W_z[H_{K^{2δ}} < H_{U^c}],  for z ∈ R^d,

for the equilibrium potential of K^{2δ} relative to U. For η ∈ (0, δ) and φ_η a non-negative smooth function supported in B_{R^d}(0, η) such that ∫ φ_η(z) dz = 1, we write h_η = h ∗ φ_η, see (2.2), for the convolution of h and φ_η.
We then define h_N, the restriction to Z^d of the blow-up of h_η, see (2.3), and we now specify our choice of f in (1.29), see (2.4). We recall that f and V tacitly depend upon ε, δ, η, N; we drop this dependence from the notation for the sake of simplicity. We denote by U_N the discrete blow-up of U (as in (0.1) or (1.2)). We also record in (2.5) the identities satisfied by f: in particular, f equals 1 outside U_N, and is constant on K^δ_N. From now on, we will denote by P̃_x the probability measure defined in (1.44), with f as in (2.4).
We define a function F on W* through (2.6): for w* ∈ W*_{U_N}, F(w*) is a functional of w_{U_N}, where w, with π*(w) = w*, is any representative of w*, and w_{U_N} denotes the time-shift of w at its first entrance in U_N; otherwise, F(w*) = 0.
We refer to (1.56) for the definition of Ω.
Moreover, under P_N, ω remains a Poisson point process, see (2.8), (2.9).

Proof. We begin with the proof of (2.7). By the first equality of (2.5), and using (1.33) of Lemma 1.2, we obtain the required identity for all x ∈ ∂_i U_N. We now turn to the proof of (2.8).
Writing E_N for the expectation under P_N, and taking G a non-negative measurable function on W*, we compute the exponential moments of ⟨ω, G⟩. This identifies the Laplace transform of ω under P_N, and (2.8) follows by Proposition 36, p. 130 of [16].
There remains to prove (2.9). By (2.8) and the definition of μ_M (below (1.56)), we see that μ_M is a Poisson point process on W_+ with intensity measure u γ_M, where γ_M is the image of 1{W*_M} e^F ν under the map w* → w*_{M,+} (see above (1.54) for notation). The claim (2.9) will thus follow once we show that

(2.14)   γ_M = P̃_{ẽ_M}.
We introduce M̄ = M ∪ U_N. We observe that the tilted equilibrium measure of M̄ coincides with the standard one, i.e. ẽ_{M̄} = e_{M̄}, see (2.15). Indeed, this follows by (1.11) and (1.49), together with the first equality in (2.5).
We also note that in (2.6) the function F does not change if we replace U_N in the definition by M̄, since U_N ⊆ M̄ and V vanishes outside U_N. Therefore, in order to prove (2.14), it suffices to verify that for any bounded measurable function g : W_+ → R, its integral with respect to γ_M coincides with that with respect to P̃_{ẽ_M}. We begin with ⟨γ_M, g⟩. By the definition of γ_M, we obtain (2.16), where for w ∈ W_+ we let w_M ∈ W_+ stand for the time-shift of w starting at its first entrance in M. We then apply the strong Markov property at H_M, and decompose according to where the walks enter M, see (2.17). On the other hand, we can express P̃_{ẽ_M} in terms of the tilted entrance measure by the sweeping identity (see (1.52)), and incorporate the fact that the tilted equilibrium measure of M̄ coincides with the standard equilibrium measure of M̄, see (2.18). Comparing (2.17) and (2.18), we obtain (2.14).
We will call the canonical Poisson point process under P N the tilted random interlacements.
Remark 2.2. The tilted interlacements do retain an interlacement-like character, because ν̃ = e^F ν is a measure on W* with the following property. Its restriction to W*_M, for M ⊂⊂ Z^d, is equal to π* ∘ Q̃_M, where, see (2.19), Q̃_M[X_0 = x] = ẽ_M(x) for x ∈ Z^d, and when ẽ_M(x) > 0, see (2.20), under Q̃_M conditioned on X_0 = x, (X_t)_{t≥0} and the right-continuous regularization of (X_{−t})_{t>0} are independent, with the same respective distributions as (X_t)_{t≥0} under P̃_x, and as X after its first jump under P̃_x conditioned on not returning to M. We do not need the above fact, but mention it because it states the property analogous to (1.54) and (1.55) satisfied by ν.
We will now compute the relative entropy of P_N with respect to P^u and relate it to the Dirichlet form of h_N (see (1.14) for notation).
Proof. By the definition of relative entropy (see (1.60)), we obtain (2.23). We also have, by the definition of f in (2.4), the identity (2.24). Since h_N is finitely supported, by the Green–Gauss theorem, the left-hand side of (2.24) equals the Dirichlet form of h_N, and (2.21) follows.
We will now successively let N → ∞, η → 0, r U → ∞, and δ → 0. The capacity of K will appear in the limit (in the above sense) of the properly scaled Dirichlet form of h N .
Proof. First, by the definition of h_N and (1.14), we obtain an exact expression for E_{Z^d}(h_N, h_N). Then we take the limit of both sides: by the smoothness of h_η and a Riemann sum argument, the properly scaled Dirichlet forms converge, with a limit expressed in terms of E_{R^d}(·, ·), the usual Dirichlet form on R^d.
Since h in (2.1) belongs to H^1(R^d), see Theorem 4.3.3, p. 152 of [9] (due to the killing outside of U, the extended Dirichlet space is contained in H^1(R^d)), we have h_η → h in H^1(R^d) as η → 0. We thus find that the limit equals cap_{R^d,U}(K^{2δ}), the relative capacity of K^{2δ} with respect to U, where the last equality follows from [9], pp. 152 and 71.
Letting r_U → ∞, the relative capacity converges to the usual Brownian capacity (this follows for instance from the variational characterization of the capacity in Theorem 2.1.5, pp. 70–71 of [9]): cap_{R^d,U}(K^{2δ}) → cap_{R^d}(K^{2δ}). Then, letting δ → 0, by Proposition 1.13, p. 60 of [15], we find that cap_{R^d}(K^{2δ}) → cap_{R^d}(K). The claim (2.26) follows.
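For orientation, we recall the variational characterizations behind these limiting procedures (in the normalization of [9], where the Dirichlet form attached to Brownian motion is ½ ∫ |∇φ|² dz); this is the sense in which the Brownian capacity and the relative capacity enter:

```latex
\mathrm{cap}_{\mathbb{R}^d}(K)
  = \inf\Big\{\, \tfrac12 \int_{\mathbb{R}^d} |\nabla \varphi|^2 \, dz \;;\;
      \varphi \in C_c^\infty(\mathbb{R}^d),\ \varphi \ge 1 \text{ on } K \,\Big\},
\qquad
\mathrm{cap}_{\mathbb{R}^d, U}(K^{2\delta})
  = \inf\Big\{\, \tfrac12 \int_{U} |\nabla \varphi|^2 \, dz \;;\;
      \varphi \in C_c^\infty(U),\ \varphi \ge 1 \text{ on } K^{2\delta} \,\Big\}.
```

Since enlarging U enlarges the class of admissible φ, the relative capacity decreases as U increases to R^d, which is the monotonicity underlying the first limit above.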
Remark 2.5. Our main objective in the next two sections is to prove (0.4), i.e. P_N[A_N] → 1. Actually, we could also use the above P_N (with a > u in place of u** in the definition of f in (2.4)) and the change of probability method to provide an alternative proof of Theorem 6.4 of [13] (which derives the asymptotic lower bound for the probability that the regularized occupation-time profile of random interlacements insulates K by values exceeding a). It is a remarkable feature that such a bulge of the occupation-time profile is constructed in the tilted interlacements mostly by steering the tilted walk towards K_N, and not by seriously tinkering with the jump rates, see for instance (1.47), as well as Propositions 3.1 and 3.4 in the next section.

Domination of equilibrium measures
In this section, our main goal is Proposition 3.4, where we prove that on a mesoscopic box inside K^δ_N, the tilted equilibrium measure dominates (u** + ε/4)/u times the corresponding standard equilibrium measure. It is the key ingredient for constructing the coupling in Proposition 4.1 in the next section. A major step is achieved in Proposition 3.1, where we prove that the tilted capacity of a mesoscopic ball (larger than the above-mentioned box) inside K^δ_N is at least (u** + ε/2)/u times its corresponding standard capacity.
We start with the precise definition of the objects of interest in this and the next section. We denote by Γ_N = ∂(K^{δ/2})_N the boundary in Z^d of the discrete blow-up of K^{δ/2} (we recall (1.1) and (1.2) for the definitions of the boundary and of the discrete blow-up). The above Γ_N will serve as a set "surrounding" K_N. We fix numbers r_i, i = 1, . . . , 4, such that

(3.1)   0 < 2r_1 < r_2 < r_3 < r_4 < 1.

We define for x in Γ_N two boxes B_1, B_2 centered at x (when there is ambiguity we add a superscript for the center x, and B_2 will only be used in Section 4), and three balls B_3, B_4, B_5, also centered at x, so that (in the notation of (1.1)) one has the inclusions in (3.4) (we now tacitly assume that N is sufficiently large so that for all x ∈ Γ_N, B^x_5 ⊆ K^δ_N, and the second equality of (2.5) holds).
We start with the domination of capacities. To prove the next Proposition 3.1, we calculate the time spent by the random walk in the mesoscopic body B 3 in two different ways (see Lemma 3.2), and relate these expressions to the capacity and to the tilted capacity.
The proof of this proposition relies on Lemmas 3.2 and 3.3.
In the second lemma we prove that starting from the boundary of B_4, the tilted walk hits B_3 with a probability tending to 0 with N.

Proof. For v in ∂B_4, we have the decomposition (3.11). By the second equality of (2.5), and in view of (1.47) and (3.4), when starting in v ∈ B_4, under P̃_v, X_{·∧T_{B_5}} behaves as a stopped simple random walk. Thus, by classical simple random walk estimates, we have an upper bound (3.12) for the probability that the tilted walk hits B_3 before exiting B_5.
By the strong Markov property successively applied at times T_{B_5} and H_{B_4}, we obtain (3.13). Taking the maximum over v in ∂B_4 on the left-hand side of (3.13), and inserting this bound in (3.11), we find (3.14) with the help of (3.12). To prove (3.10), it now suffices to show (3.15). As a result of (1.7) and the stopping theorem, for large N and any x ∈ Γ_N, we obtain (3.16). By a similar argument as in Lemma 1.1, we obtain a second-moment estimate, and by the Chebyshev inequality, writing c̄(Ũ) = 2c(Ũ)/c, with c as in (3.16), we obtain (3.18). With (3.16) and (3.18) put together, we obtain a bound valid for all z in ∂B_5, see (3.19). By the definition of f (see (2.4)), and since h_η ∈ C_0^∞, the discrete Laplacian of f is controlled; by the first equality of (2.5), we have Δ_dis f = 0 outside U_N. Hence, we find that for large N, for all x ∈ Γ_N and y ∈ ∂B_5, on the event I_N, the bound (3.21) holds. Therefore, by (3.19) and (3.21), we find the desired estimate. This proves (3.15) and concludes the proof of Lemma 3.3.
With all ingredients prepared, we are ready to prove the domination of capacities stated in Proposition 3.1. In the proof we combine the estimates obtained in Lemmas 1.1 and 3.2, perform an argument similar to (3.11), (3.12) and (3.13), and employ Lemma 3.3 to control the tilted return probability.
Proof of Proposition 3.1. We will bound the left term of (3.6) from above and the right term from below. We start with the upper bound on the left-hand side of (3.6).
For all y in ∂_i B_3, by the strong Markov property applied at time T_{B_4} (and then at the subsequent entrance time in B_3), we obtain (3.23). Taking the maximum over y ∈ ∂_i B_3 on the left-hand side of (3.23) and rearranging, we find (3.24), in view of (3.10).
Then we notice that, since f is constant on K δ N ⊇ B 4 (see (2.5) and (3.4)), we obtain an upper bound on the left-hand side of (3.6) under P e B 3 . On the other hand, by (1.19) of Lemma 1.1, we have a lower bound on the right-hand side of (3.6). Combining (3.26), (3.27) and Lemma 3.2, and with the help of (1.19) and (3.10), we see that Proposition 3.1 readily follows.
We now turn to the domination of the equilibrium measures at a smaller scale on B 1 . In the proof of Proposition 3.4, thanks to the domination of capacities proved in Proposition 3.1, we are able to reduce the domination of equilibrium measures to the domination of (relative) entrance measures. This is performed in Lemma 3.5.
Proposition 3.4. When N is large, for all x ∈ Γ N and z ∈ ∂ i B 1 , the tilted equilibrium measure of B 1 at z dominates the corresponding standard equilibrium measure.

The proof of Proposition 3.4 relies on the following lemma, in which we prove that the killed entrance measure of B 1 almost dominates the corresponding standard entrance measure. From now on, we fix ǫ ′ = ǫ/(4u * * + 2ǫ). We recall (1.17) for notation.
Lemma 3.5. For sufficiently large N , for all x ∈ Γ N and z ∈ ∂ i B 1 , the estimate (3.30) holds.

The proof of Lemma 3.5 has the same flavour as Section 3 of [2] and indeed relies on Lemma 3.3 of the same reference.
Proof. We decompose h B 1 ,B 4 (y, z) according to the time and place of the last step before entering B 1 at z, and obtain an identity for y outside B 1 and z in B 1 . Similarly, we obtain the analogous identity for the standard walk, for y outside B 1 and z in B 1 . Therefore, to prove (3.30), it suffices to show (3.33) for large N and for all y, y ′ ∈ ∂ i B 3 and z ′ ∈ ∂B 1 . Applying an argument similar to Lemma 3.3 of [2] to B 4 and B 1 , we obtain (3.37), and by analogous arguments we also obtain (3.38). By the definition of r 1 and r 3 (see (3.1)), N 2r 1 −r 3 ≪ 1. Therefore, combining (3.37) and (3.38) with this observation, the claim (3.33) will follow once we show (see above Lemma 3.5 for our choice of ǫ ′ ) that, when N is sufficiently large, for all x ∈ Γ N , all y, y ′ ∈ ∂ i B 3 and all z ′ ∈ ∂B 1 , the estimate (3.40) holds. By (1.7) and (1.10), for large N , setting B = B(y, N r 4 2 ), we have the required bounds. Hence, we obtain (3.40), and (3.33) follows. This proves Lemma 3.5.
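The last-step decomposition at the start of the proof can be sketched as follows (our reconstruction, with h B 1 ,B 4 (y, z) the probability that the walk started at y enters B 1 at z before exiting B 4 ):

```latex
% Sketch of the decomposition according to the time n and place z' of the
% last step before entering B_1 at z (our reconstruction):
h_{B_1,B_4}(y,z)
  \;=\; P_y\big[H_{B_1} < T_{B_4},\; X_{H_{B_1}} = z\big]
  \;=\; \sum_{\substack{z' \in \partial B_1 \\ z' \sim z}}
        \sum_{n \ge 0}
        P_y\big[X_n = z',\; n < H_{B_1} \wedge T_{B_4}\big]\,\frac{1}{2d}\,,
% where z' \sim z means that z' and z are nearest neighbors, and the factor
% 1/(2d) is the probability of the final step from z' to z.
```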
We are now ready to prove Proposition 3.4. In the proof, we make use of the sweeping identity and, in effect, reduce the comparison of the standard and tilted equilibrium measures of B 1 to the comparison of the standard and tilted capacities of B 3 , and to the comparison of the (killed) entrance measures.
Proof of Proposition 3.4. For large N , for all x ∈ Γ N and z ∈ ∂ i B 1 , the sweeping identity yields a first decomposition. Since, up to the exit time from B 4 , the tilted and standard walks have the same law (see (2.5)), we see that for y ∈ ∂ i B 3 and z ∈ ∂B 1 the corresponding entrance probabilities coincide. Taking Lemma 3.5 into account, we find a lower bound valid for large N , for all x ∈ Γ N and z ∈ ∂B 1 . Thus, coming back to (3.43), we conclude, with our choice of ǫ ′ (above Lemma 3.5), that the claimed domination holds. This completes the proof of Proposition 3.4.

Coupling and Disconnection
In this section, we prove in Theorem 4.3 that the tilted interlacements disconnect K N from infinity with probability tending to 1 as N goes to infinity. To this end, we show that in mesoscopic boxes with centers in Γ N (introduced above (3.1)), the tilted random interlacements locally "dominate" random interlacements at a level higher than u * * , and thus typically disconnect, in each such box, the center from the boundary of the box with very high probability. As a consequence, the tilted interlacements also disconnect the macroscopic body from infinity with high probability. The main step is Proposition 4.1, where we construct, at each point of Γ N , a coupling under which the tilted random interlacements locally dominate, with high probability, some standard random interlacements at a level higher than u * * .
We recall the definitions of B 1 and B 2 from (3.2).
Proposition 4.1. When N is large, for all x ∈ Γ N , there exists a probability space (Ω, Ā, Q) and random sets I and I 1 defined on Ω, with the same respective laws as I u ∩ B 1 under P N and I u * * +ǫ/8 under P u * * +ǫ/8 , such that (4.1) holds (the constants depend on r 1 , r 2 , ǫ).
The idea of the proof is to stochastically dominate the trace in B 1 of random interlacements at a level higher than u * * by the "first excursions" (from some inner boundary of B 1 to ∂B 2 ) of the trajectories of some random interlacements with slightly higher intensity, and then to further dominate these excursions by the same kind of "first excursions" of trajectories of the tilted interlacements. The following proposition, which corresponds to the first of the above-mentioned stochastic dominations, in essence rephrases Proposition 4.4 of [1]. We begin with some notation.
For A ⊂ B ⊂⊂ Z d , we write k A,B for the law on Γ(Z d ) (see below (1.4)) of the stopped process X ·∧T B under P e A . We also denote the trace of a point process η = Σ i δ w i on the space Γ(Z d ) by (4.2) I(η) = ∪ i Range(w i ).
Proposition 4.2. When N is large, for all x ∈ Γ N , there exists a probability space (Σ, B, Q) endowed with a Poisson point process η with intensity measure (u * * + ǫ/4) k B 1 ,B 2 , and a random set I 1 ⊂ Z d with the law of I u * * +ǫ/8 ∩ B 1 under P u * * +ǫ/8 , such that (4.3) holds. We refer the reader to Proposition 5.4 of [1] and to Section 8 of [1] for the proof of Proposition 4.2.
We now construct another coupling such that the trace on B 1 of the first excursions of the tilted random interlacements dominates the trace of the corresponding excursions for random interlacements at level u * * + ǫ/4. Combined with Proposition 4.2, this will complete the proof of Proposition 4.1.
Proof of Proposition 4.1. We keep the notation of Proposition 4.2. Let α be the measure on ∂ i B 1 such that for all z ∈ ∂ i B 1 , (4.4) α(z) = u ẽ B 1 (z) − (u * * + ǫ/4) e B 1 (z), where ẽ B 1 denotes the equilibrium measure of B 1 attached to the tilted walk.
By Proposition 3.4, α is a positive measure. Hence, we can construct an auxiliary probability space ( Ω, A, Q) endowed with a Poisson point process η̃ on Γ(Z d ) with intensity measure k α (·) = P α (X ·∧T B 2 ∈ ·). Since, for all z in ∂ i B 1 , the tilted walk coincides with the simple random walk up to the exit from B 2 , we obtain that (4.5) I = (I(η̃) ∪ I(η)) ∩ B 1 is stochastically dominated by I u ∩ B 1 under P N .
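The superposition behind (4.4) and (4.5) can be summarized as follows (our schematic rendering; we write ẽ B 1 for the tilted equilibrium measure and use the fact, noted above, that the tilted and simple walks agree up to the exit from B 2 ):

```latex
% Schematic superposition (our rendering):
u\,\widetilde{e}_{B_1}(z)
  \;=\; \Big(u_{**} + \tfrac{\epsilon}{4}\Big)\, e_{B_1}(z) \;+\; \alpha(z),
  \qquad z \in \partial_i B_1,
% so the Poisson process of first excursions of the tilted interlacements
% at level u (trajectories stopped at time T_{B_2}) can be realized as the
% independent superposition of \eta, with intensity
% (u_{**}+\epsilon/4)\,k_{B_1,B_2}, and \widetilde{\eta}, with intensity k_\alpha.
```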
We can thus construct, on some extension (Ω, A, Q), a set I distributed as I u ∩ B 1 under P N , so that I ⊇ I(η), Q-a.s.

We are now ready to derive a key step for the proof of Theorem 0.1. Namely, we will now show that with P N -probability tending to 1, the event A N (= {K N is not connected to ∞ in V u }, see (0.2)) does occur.

Proof. Note that for large N , when K N is connected to infinity by a nearest-neighbor path, this path must go through the set Γ N at some point x (see above (3.1)). Hence, this path connects x to the inner boundary of B x 1 . Thus, we find that for large N , (4.9) holds. By Proposition 4.1, for large N , uniformly in x ∈ Γ N , we can bound the probability in the right-hand side of (4.9), where the constants depend on r 1 , r 2 , ǫ.
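The union bound implicit in this step can be written schematically as follows (our rendering; B x 1 denotes the box B 1 centered at x ∈ Γ N ):

```latex
% Schematic union bound (our rendering):
\widetilde{P}_N\big[A_N^c\big]
  \;\le\; \sum_{x \in \Gamma_N}
  \widetilde{P}_N\big[\, x \text{ is connected to } \partial_i B_1^x
  \text{ in } \mathcal{V}^u \,\big],
% and by Proposition 4.1 each summand is small, while |\Gamma_N| grows at
% most polynomially in N, so the right-hand side tends to 0.
```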
Hence, we see that for large N , the complement of A N has small P N -probability. This concludes the proof of Theorem 4.3.

Denouement
In this section we combine the various ingredients, namely Theorem 4.3, Propositions 2.3 and 2.4, and prove Theorem 0.1.
Proof of Theorem 0.1. We recall the entropy inequality (see (1.61)) and apply it to P u and P N , defined in Sections 1 and 2. By Theorem 4.3, we know that lim N →∞ P N [A N ] = 1, and (1.61) yields (5.1). By Proposition 2.3, we can represent the right-hand side of (5.1) accordingly. Then, by Proposition 2.4, taking consecutively the limits η → 0, r U → ∞, and δ → 0, we obtain a corresponding lower bound. Finally, letting ǫ → 0, we obtain (0.3), as desired.
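For the reader's convenience, the entropy inequality invoked here (see (1.61)) is the following classical fact, stated in our transcription: for probability measures P̃ ≪ P and an event A with P̃[A] > 0,

```latex
P[A] \;\ge\; \widetilde{P}[A]\,
  \exp\!\Big( -\frac{1}{\widetilde{P}[A]}
  \big( H(\widetilde{P}\,|\,P) + e^{-1} \big) \Big),
% where H(\widetilde{P}\,|\,P) = \widetilde{E}\big[\log \frac{d\widetilde{P}}{dP}\big]
% denotes the relative entropy of \widetilde{P} with respect to P.
% Applied with P = P^u, \widetilde{P} = \widetilde{P}_N and A = A_N, together
% with \widetilde{P}_N[A_N] \to 1, this yields (5.1).
```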
1) It is an important question whether Theorem 0.1 can be complemented by a matching asymptotic upper bound, say when K is a smooth compact set. In view of Theorems 6.2 and 6.4 of [13] (see also Remark 6.5 2) of [13]), this would indicate that the large deviations of the occupation-time profile of random interlacements, insulating K by values u ′ of the local field (with u ′ corresponding to a non-percolative behaviour of V u ′ ), capture the main mechanism underlying the disconnection of a macroscopic body in the percolative regime of the vacant set.
2) As u → 0, the right-hand side of (0.3) tends to the finite limit −(u * * /d) cap(K). One may wonder whether this limiting procedure retains any pertinence for the study of the disconnection of the macroscopic body K N by a simple random walk trajectory. For instance, does one have (5.4) lim inf