Random walks in random hypergeometric environment

We consider one-dependent random walks on Z^d in random hypergeometric environment, for d ≥ 3. These are memory-one walks in a large class of environments, parameterized by positive weights on directed edges and on pairs of directed edges, which includes the class of Dirichlet environments as a special case. We show that the walk is a.s. transient for any choice of the parameters, and moreover that the return time has some finite positive moment. We then characterize, in terms of a function κ of the initial weights, the existence of an invariant measure for the process from the point of view of the walker which is absolutely continuous with respect to the initial distribution on the environment. These results generalize [Sab11] and [Sab13] on random walks in Dirichlet environment. It turns out that κ coincides with the corresponding parameter in the Dirichlet case; in particular, the existence of such invariant measures is independent of the weights on pairs of directed edges, and is determined solely by the weights on directed edges.


Introduction
Despite important progress in the ballistic, balanced, or perturbative regimes (see in particular [SZ99, Szn00, Szn02, SZ06, BZ07, BDR14, RAS09, BZ08, Law82, GZ12, BD14]), random walks in i.i.d. random environment in dimension d ≥ 2 remain a very challenging model. The high non-reversibility of this model is at the heart of the difficulty and several of the basic questions concerning recurrence/transience, equivalence between directional transience and ballisticity, and diffusive behavior are still unsolved. The process viewed from the particle, which is a key tool for reversible models, is still only understood under specific conditions (see [Sab13,RA03,BCR16]).
The special case of random walks in random Dirichlet environment (RWDE), [ES06], where the environment is i.i.d. at each site and distributed according to a Dirichlet law, shows remarkable simplifications, while keeping the same phenomenological behavior as the general model (see [ST17] for a survey). For this special choice of distribution, a key property of "statistical invariance by time reversal" makes it possible to prove transience in dimension d ≥ 3 [Sab11], existence of an invariant measure viewed from the particle absolutely continuous with respect to the static law, and equivalence between directional transience and ballisticity in dimension d ≥ 3 [ST11, Sab13, Bou13, ST17].
The aim of this paper is to give a generalization of this model and of these results to a class of one-dependent random walks in random environment, based on some hypergeometric distributions.
The hypergeometric functions defined in (2) below are natural special functions constructed from the Dirichlet distributions. A generalization of the key statistical time-reversal property is proved (see Corollary 4.3 below), based on a duality property of these hypergeometric functions. The latter is a multidimensional generalization of the fact that 2F1(a, b; c; z) = 2F1(b, a; c; z), where 2F1 is the basic hypergeometric series (see e.g. [AKKI11], Section 1.2.1 for the definition and Section 1.3.1 for the integral representation).
This generalization is natural from the following considerations. The statistical time-reversal property mentioned above makes it possible to write a rather efficient proof of transience and of the existence of an absolutely continuous invariant measure viewed from the particle in dimension d ≥ 3, but it fails to give information on some other natural questions on random walks in random Dirichlet environment (RWDE), such as large deviations and Sznitman's condition (T). Nevertheless, in dimension 1 in the Dirichlet case, the large deviation rate function can be explicitly computed and involves some hypergeometric functions (see [ST17], Section 8). The meaning of this computation remains rather mysterious, and the model investigated in this paper comes from an attempt to generalize the computation done in [ST17]. Besides, it is also natural to ask to what extent the strategy used for Dirichlet environments can be generalized. We believe that the class of Dirichlet environments is the only class of i.i.d. environments on which the random walk satisfies the statistical time-reversal property mentioned above. This paper shows that a larger class of environments, for one-dependent random walks, nevertheless shares the same basic features as the Dirichlet environments.
Statement of the results

Hypergeometric functions
Denote by ∆^(n) := {u ∈ (0, 1]^n : Σ_{i=1}^n u_i = 1} the simplex of dimension n − 1. As parameters we take vectors α ∈ (R*_+)^n and β ∈ (R*_+)^l that satisfy Σ_i α_i = Σ_j β_j and have strictly positive coordinates, and an l × n matrix Z = (Z_{j,i}) with strictly positive coefficients, where here and after we use the notation R*_+ = {t ∈ R : t > 0}. Call functions of the following form hypergeometric functions:

ϕ^{(α,β)}_Z(u) := Π_{i=1}^n u_i^{α_i − 1} Π_{j=1}^l ((Z · u)_j)^{−β_j},   (1)

Φ^{(α,β)}(Z) := ∫_{∆^(n)} ϕ^{(α,β)}_Z(u) du.   (2)

Here the integral is computed according to the Lebesgue measure on the simplex, du = du_1 ⋯ du_{n−1}, so that u_n = 1 − Σ_{i=1}^{n−1} u_i. When (Z_{j,i}) has strictly positive coefficients, we have (Z · u)_j ≥ z for all j and all u ∈ ∆^(n), with z = min_{i,j} Z_{j,i} > 0, so that the integral (2) is finite. These functions are classical generalized hypergeometric functions, see e.g. [AKKI11, Section 3.7.4].
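As a small numerical sketch (our own illustration, assuming the integrand form Π_i u_i^{α_i−1} Π_j ((Z·u)_j)^{−β_j} described above), one can evaluate Φ for n = 2 by quadrature on the parameterization u = (t, 1 − t), and check the finiteness bound coming from (Z·u)_j ≥ z:

```python
import math

# Hedged sketch: numerically evaluate the hypergeometric integral
# Phi^{(alpha,beta)}(Z) for n = 2, where the simplex is parameterized by
# u = (t, 1 - t), t in (0, 1).  The integrand form is the one assumed in the
# lead-in: prod_i u_i^(alpha_i - 1) * prod_j ((Z u)_j)^(-beta_j).

def phi_integrand(t, alpha, beta, Z):
    u = (t, 1.0 - t)
    val = u[0] ** (alpha[0] - 1) * u[1] ** (alpha[1] - 1)
    for j, row in enumerate(Z):
        val *= (row[0] * u[0] + row[1] * u[1]) ** (-beta[j])
    return val

def Phi(alpha, beta, Z, m=20000):
    # midpoint rule on (0, 1); adequate for alpha_i >= 1
    h = 1.0 / m
    return h * sum(phi_integrand((k + 0.5) * h, alpha, beta, Z) for k in range(m))

alpha = (1.5, 2.5)
beta = (2.0, 2.0)              # sum(alpha) == sum(beta), as required
Z = ((1.0, 2.0), (3.0, 0.5))

val = Phi(alpha, beta, Z)

# Finiteness bound from the text: (Z u)_j >= z := min Z_{j,i} > 0, hence
# Phi <= z^(-sum(beta)) * B(alpha), where B(alpha) is the Dirichlet
# normalization prod Gamma(alpha_i) / Gamma(sum alpha_i).
z = min(min(row) for row in Z)
B_alpha = math.gamma(alpha[0]) * math.gamma(alpha[1]) / math.gamma(sum(alpha))
bound = z ** (-sum(beta)) * B_alpha
```

The bound follows from bounding each factor ((Z·u)_j)^{−β_j} by z^{−β_j} pointwise.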

The model on Z d
We denote by (e 1 , . . . , e d ) the canonical base of R d , and we set e d+i = −e i for i = 1, . . . , d.
Consider the lattice Z^d endowed with its natural directed graph structure G_{Z^d} = (Z^d, E_{Z^d}), where E_{Z^d} := {(x, x + e_i) : x ∈ Z^d, i = 1, …, 2d}, and let H_{Z^d} = (E_{Z^d}, K_{Z^d}) be the associated arc graph, whose nodes are the directed edges of G_{Z^d} and whose arcs are the pairs of edges (e, e′) such that the head of e is the tail of e′. Concretely, K = K_{Z^d} is the set of couples of succeeding edges that can be crossed by a random walker on the graph G_{Z^d}. The space Ω_K ⊂ (0, 1]^K of random environments on H_{Z^d} is the subspace of transition probabilities of nearest-neighbor chains on H_{Z^d}, i.e. of those ω such that for every e, Σ_{e′:(e,e′)∈K} ω_{e,e′} = 1.
The space Ω K also naturally describes the space of one-dependent Markov chain kernels on the graph Z d .
Let us now define the random environment. Fix some positive parameters (α_1, …, α_2d) and a 2d × 2d matrix Z = (Z_{i,j}) with strictly positive coefficients. The vectors (u_{(x,x+e_i)})_{i=1,…,2d}, x ∈ V, are chosen randomly and independently according to the same distribution on the simplex ∆^(2d) with density This defines a product law on (u_{(x,x+e_i)})_{x∈Z^d, i=1,…,2d}, which is denoted by P^{(α,Z)}. Denote by E^{(α,Z)} the corresponding expectation. We now define a random environment on K_{Z^d} by first sampling (u_{(x,x+e_i)})_{x∈Z^d, i=1,…,2d} according to this product law and then letting Naturally, ω defines the transition probabilities of a Markov chain on the arc graph H_{Z^d}, i.e. ω ∈ Ω_K, and the distribution P^{(α,Z)} induces a probability distribution on the set of environments Ω_K.
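A hedged sampling sketch (our own illustration, not code from the paper): assuming the vertex weight vector has a density proportional to Π_i u_i^{α_i−1} Π_j ((Z·u)_j)^{−β_j} as in Section 2.1, it can be sampled exactly by rejection against a Dirichlet(α) proposal, since (Z·u)_j ≥ z := min Z makes the ratio a valid acceptance probability:

```python
import random

# Hedged sketch: exact rejection sampler for a weight vector u on the simplex
# with density proportional to prod_i u_i^(alpha_i - 1) * prod_j ((Z u)_j)^(-beta_j)
# (the form assumed in the lead-in).  Proposal: Dirichlet(alpha); since
# (Z u)_j >= z := min Z, the ratio prod_j (z / (Z u)_j)^(beta_j) lies in (0, 1].

def sample_dirichlet(alpha, rng):
    # Dirichlet via normalized independent Gamma variates
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def sample_hypergeometric_weights(alpha, beta, Z, rng):
    z = min(min(row) for row in Z)
    while True:
        u = sample_dirichlet(alpha, rng)
        zu = [sum(Zji * ui for Zji, ui in zip(row, u)) for row in Z]
        accept = 1.0
        for bj, zuj in zip(beta, zu):
            accept *= (z / zuj) ** bj
        if rng.random() < accept:
            return u

rng = random.Random(0)
# illustration with 2d = 4 directed edges at a site (d = 2, for brevity only)
alpha = [1.0, 2.0, 1.5, 0.5]
beta = alpha                      # sum(alpha) == sum(beta)
Z = [[1.0, 0.5, 2.0, 1.0],
     [0.5, 1.0, 1.0, 2.0],
     [2.0, 1.0, 1.0, 0.5],
     [1.0, 2.0, 0.5, 1.0]]
u = sample_hypergeometric_weights(alpha, beta, Z, rng)
```

The rejection step is exact but can be slow when z is small relative to typical values of (Z·u)_j; it is meant only to make the density concrete.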
For an environment ω we denote by P_{e,ω} the law of the Markov chain (X_n)_{n∈N} on state space E started at e ∈ E with step distribution ω. Whenever ω is sampled according to P^{(α,Z)}, we say that this Markov chain is distributed according to the quenched law. Denote by P^{(α,Z)}_e the marginal of the joint law of the Markov chain started at e and the environment distributed according to P^{(α,Z)}. The latter is also called the averaged law, or the annealed law, of the walk X, and it is characterized by Remark that from (3), whenever Z_{i,j} = Z_{i,1} for all i, j = 1, …, 2d, we have ω_{(x−e_i,x),(x,x+e_j)} = u_{x,x+e_j}. Therefore, the walk defines a Markov chain on the original graph G_{Z^d}, and moreover (u_{x,x+e_i})_{i=1,…,2d} are independent and follow a Dirichlet distribution with parameters (α_1, …, α_2d) at each site. Hence, this case corresponds to the RWDE mentioned in the introduction (for an overview of RWDE see [ST17]).

Order of the Green function and transience on Z^d
Fix parameters (α i ) i=1,...,2d and (Z i,j ) i,j=1,...,2d as in Section 2.2 and let ω be distributed according to P (α,Z) . Denote by G ω (e 0 , e 0 ) the Green function at (e 0 , e 0 ) of the Markov chain with jump probabilities ω, that is, the P e0,ω -expected number of returns to e 0 .
Remark 2.2. A similar statement was proved in [Sab11, Theorem 1] in the Dirichlet case for s < κ (an interpretation of the parameter κ is given at the end of Section 2.4). Hence, the last theorem generalizes this to the hypergeometric environment in the case s < κ̃ < κ. The statement would certainly also be true in the case κ̃ ≤ s < κ: to prove it in this regime, one would need to consider a max-flow type problem adapted to the arc graph H, as in Section 7.2 of [ST17], together with our proof of Theorem 2.4. We do not include that analysis in the current paper, but we stress that it could be done using the same techniques.
Remark 2.3. As in the standard Dirichlet case, the case of dimension 2 is still mysterious. It is expected that the walk is recurrent when the weights are symmetric with respect to the axes (i.e. null expected drift at the first step), and hence that the Green function is a.s. infinite. When the weights are not symmetric, we would expect that there is no long-range trapping effect in d = 2, so that the integrability condition would be the same as in d ≥ 3. But this is still far from being understood. In dimension d = 1, it would be possible to adapt the proof of the Dirichlet case (see [ST17], page 502) to compute the law of the probability, starting from the edge (0, 1), of never returning to the edge (0, 1). It would give that the Green function is integrable for s < |α − β|, where α (resp. β) is the weight of the right-directed edge (resp. left-directed edge). The integrability should not depend on the Z parameters. When α = β the walk should be recurrent.

Invariant measure from the point of view of the walker
Let (τ_x)_{x∈Z^d} be the shift maps on Ω_K, where τ_x(ω)(e, e′) := ω(x + e, x + e′). Here x + e := (x + y, x + y′) for x ∈ Z^d and an edge e = (y, y′) ∈ E_{Z^d}. For an edge e = (y, y′) we also write τ_e := τ_y. Following the strategy of [Koz85] and [KV86], we define the process ω_n := τ_{X_n}(ω_0) on Ω_K from the point of view of the walker, with initial state ω_0 ∼ P. Under P_{e_0}, this is a Markov process on Ω_K. Its infinitesimal generator R is given by The main result of this section is the following generalization of Theorem 1 of [Sab13].
The parameter κ was first considered in [Sab11] in the context of Z^d, and was introduced by Tournier [Tou09] for finite graphs. Let us give an interpretation of this parameter. If S ⊂ V is a nonempty set of vertices, the outer boundary ∂_+(S) of S is defined as the set of edges with tail in S and head outside S. Define also α(∂_+(S)) := Σ_{e∈∂_+(S)} α_e, the total α-strength of the edges leaving S. Then κ is the minimal outer weight α(∂_+(S)) over sets S spanned by a single edge. Roughly speaking, this means that the strongest traps in this model are the traps consisting of a single edge, and the strength of such a trap is its outer weight. This last assertion is justified by the following lemma.
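A hedged computational sketch of this interpretation (our own reading, following the single-edge-trap picture of the Dirichlet case [Sab11, Tou09]): on Z^d, the trap S = {x, x + e_i} keeps the two edges (x, x + e_i) and (x + e_i, x) inside S, so its outer weight is 2 Σ_j α_j − α_i − α_{i+d}, and minimizing over i gives κ.

```python
# Hedged sketch: kappa for translation-invariant weights on Z^d, assuming the
# single-edge-trap formula alpha(del_plus(S)) = 2*sum(alpha) - alpha_i - alpha_{i+d}
# for S = {x, x + e_i}.  Note kappa depends only on the edge weights alpha,
# not on the pair weights Z.

def kappa(alpha):
    d = len(alpha) // 2
    total = sum(alpha)
    return min(2 * total - alpha[i] - alpha[i + d] for i in range(d))

# d = 3; weights for directions e_1, e_2, e_3, -e_1, -e_2, -e_3
alpha = [1.0, 0.5, 0.5, 2.0, 0.5, 0.5]
k = kappa(alpha)   # opposite pairs sum to 3.0, 1.0, 1.0, so k = 2*5.0 - 3.0
```

This makes explicit the point of Theorem 2.5: only the α's enter κ.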
Proof. Using (4) and the independence of the u e between vertices, and noticing that under P e0,ω , T i is a geometric random variable with expectation Remark 2.6. We believe that the statement of the last lemma can be strengthened to say that E Since the proof should be somewhat involved, and since we shall use only the weak form of the lemma (namely an implication in the case s = 1), this is not done in the current paper.

General graphs
For the proof it is necessary to define our random environments on general graphs. This is done in Sections 3.1 and 3.2 below.

Directed arc graph
Remember that a directed graph is connected if for any two vertices x and y there is a directed path connecting x to y, or connecting y to x. Let G = (V, E) be a connected directed graph with vertex set V and edge set E such that the in-degrees and out-degrees are finite at each vertex. Here and after, the in-degree (resp. out-degree) of a vertex x ∈ V is the number of vertices y ∈ V such that (y, x) ∈ E (resp. (x, y) ∈ E). For each edge e we denote by e̲ and ē its tail and head, so that e = (e̲, ē), and we denote by ě = (ē, e̲) the "reversed edge". We denote by Ǧ = (V, Ě) the reversed graph with edge set Ě := {ě : e ∈ E}.
We define the (directed and connected) arc graph H = (E, K), with nodes E and arcs K, by setting K := {k = (e, e′) ∈ E² : ē = e̲′}. In words, H is the graph whose nodes are the edges of G and whose arcs are the directed pairs of edges of G that share a common vertex, the head of the first edge being the tail of the second one. Define the reversed graph Ȟ = (Ě, Ǩ) by the relation (ě′, ě) ∈ Ǩ ⇔ (e, e′) ∈ K. Clearly, Ȟ is also the arc graph of the reversed graph Ǧ.
The space Ω K will be the space of environments of Markov chains on the directed graph H. The space ΩǨ is defined similarly for the reversed graphȞ = (Ě,Ǩ). As in Section 2.2, we note that Ω K also describes the one-dependent Markov chains on the graph G.

The model on a general directed arc graph
Let G = (V, E) be a directed connected graph, and let H be the corresponding arc graph. Fix strictly positive parameters (α_e)_{e∈E} and (Z_{e,e′})_{(e,e′)∈K}. Recall the definition of ϕ and Φ in Section 2.1. For every x ∈ V, let ϕ_x be defined for u in the deg(x)-simplex ∆^(x). Here deg(x) is the out-degree of x. Similarly we let, as in (2), where d_x u = Π_{e: e̲=x, e≠e_x} du_e is the measure on ∆^(x) defined in Section 2.1, and e_x is an arbitrary choice of edge exiting x (obviously, d_x u does not depend on the choice of e_x). Let U^(x), x ∈ V, be random vectors with values in ∆^(x), which are independent and distributed according to the density For every e ∈ E let u_e := U^(e̲)_e, the e coordinate of the random vector U^(e̲). We denote by P^{(α,Z)} the distribution of (u_e)_{e∈E} defined in this way. Denote by E^{(α,Z)} the corresponding expectation.
From the random variables u e , e ∈ E, we construct an environment ω ∈ Ω K by With a slight abuse of notation, we also denote by P (α,Z) the law thus induced on Ω K . For ω ∈ Ω K we denote by P e,ω the law of the Markov chain X on E started at e ∈ E with step distribution ω.
Whenever ω is sampled according to P^{(α,Z)}, the law of the resulting Markov chain is called the quenched law. Denote by P^{(α,Z)}_e the marginal of the joint law of the Markov chain started at e and the environment distributed according to P^{(α,Z)}. The latter is also called the averaged law, or annealed law, of the walk X, and is characterized by Note that, as in the case of Z^d, if the matrices (Z_{e,e′})_{ē=x=e̲′}, x ∈ V, have constant rows (i.e. Z_{e,e′} = c_e for every (e, e′) ∈ K), then U^(x) has the Dirichlet((α_e)_{e̲=x}) distribution. Hence ω is an i.i.d. Dirichlet environment, and the walk is a standard random walk in Dirichlet environment.
The model defined in Section 2.2 on Z^d obviously corresponds to the case where the parameters (α_e)_{e∈E} and (Z_{e,e′})_{(e,e′)∈K} are given by with notation as in Section 2.2. We warn the reader of the slight conflict of notation between (α_i) and (α_e), and between (Z_{i,j}) and (Z_{e,e′}); it should be clear from the context which is meant. Obviously, the model of Section 2.2 describes exactly the parameters on H_{Z^d} which are translation invariant, i.e. which satisfy α_e = α_{x+e} for all x ∈ Z^d, e ∈ E, and Z_{e,e′} = Z_{x+e,x+e′} for all x ∈ Z^d and (e, e′) ∈ K.

A remark on our motivation
The origin of this work is the following fact, proved in [ST17, Section 8.3]. In dimension 1, the rate function of the annealed large deviation principle for the hitting time of a level k is computed in terms of the hypergeometric function 2F1. The proof is based on the identification of the law of the solution of a distributional equation, inspired by Chamayou and Letac [CL91]. The symmetry property of 2F1, which is a special case of the duality property proved in Appendix A, is at the core of the argument. In the one-dimensional case, this identity generalizes the statistical time-reversal property. A very interesting problem, which is still open, is to find a multidimensional counterpart of the rate function formula.
Another motivation is to find other models that share the same type of statistical time-reversal property as Dirichlet environments. We believe that Dirichlet environments are the only nontrivial model based on independent transition probabilities at each site that has this property. The model presented here is a natural extension of the Dirichlet environment that allows one-dependence of the quenched Markov chain and shares similar properties.

Marginal and multiplicative moments
We assume in this chapter that the graph G is finite. Our first observation regarding the hypergeometric distribution concerns its marginals. A direct computation gives that if ω is defined as in (8), then we have for e, e′ so that In particular, the above is finite whenever the arguments of Φ_x are strictly positive, and in particular as long as s > −min{α_e, α_{e′}}. Note that in the Dirichlet case, e.g. whenever Z ≡ 1, ω(e, e′) = u_{e′} has the Beta distribution Beta(α_{e′}, Σ_{e″: e̲″=e̲′} α_{e″} − α_{e′}).
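The Dirichlet marginal statement can be checked numerically (a hedged sanity check of the aggregation property of the Dirichlet law, not code from the paper): for a Dirichlet(a_1, a_2, a_3) vector, the moment E[u_1^s] computed by direct quadrature over the simplex should match the Beta(a_1, a_2 + a_3) moment formula.

```python
import math

# Hedged check: for Dirichlet(a), the marginal of one coordinate is
# Beta(a_1, sum(a) - a_1).  Verify E[u_1^s] for n = 3 by midpoint quadrature
# over the simplex against the Beta moment Gamma(a1+s)Gamma(a1+b)/(Gamma(a1)Gamma(a1+b+s)).

a = (1.5, 2.0, 2.5)
s = 1.2
m = 500
h = 1.0 / m
# Dirichlet normalization constant
norm = math.gamma(sum(a)) / (math.gamma(a[0]) * math.gamma(a[1]) * math.gamma(a[2]))

num = 0.0
for i in range(m):
    u1 = (i + 0.5) * h
    for j in range(m):
        u2 = (j + 0.5) * h
        u3 = 1.0 - u1 - u2
        if u3 <= 0.0:
            continue                       # outside the simplex
        dens = norm * u1 ** (a[0] - 1) * u2 ** (a[1] - 1) * u3 ** (a[2] - 1)
        num += u1 ** s * dens * h * h

b1, b2 = a[0], a[1] + a[2]
beta_moment = (math.gamma(b1 + s) * math.gamma(b1 + b2)
               / (math.gamma(b1) * math.gamma(b1 + b2 + s)))
```

The agreement is up to quadrature error only; the identity itself is exact.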
Next, we shall expand the definition of the measure P (α,Z) on environments to include a possibility to increase or decrease the weights α and Z.
Assume here that G is finite. For a function ξ : K → R, let ξ^e and ξ_{e′} denote the total 'weight' leaving e and entering e′, respectively. We now define the measure P^{(α,ξ,Z)} on Ω_K by a similar procedure. For every and similarly This is well defined as long as α_e + ξ^e > 0 and α_e + ξ_e > 0 for all e ∈ E. Next, U^(x), x ∈ V, are taken to be independent with density Putting u_e := U^(e̲)_e, e ∈ E, and constructing ω ∈ Ω_K as in (8), we denote the corresponding quenched and annealed laws by P_{e_0,ω} and P^{(α,ξ,Z)}_{e_0}. Note that in the case ξ ≡ 0 we have P Also, for functions β, γ : A → R_+ such that A is a finite set and β is strictly positive, we define A direct computation gives that for every ξ, Θ : as long as the right-hand side of the equation is well defined. If we think of P^{(α,Z)} as the law of (u_e)_{e∈E}, i.e. a measure on Π_{x∈V} ∆^(x), then the Radon–Nikodym derivative obtained by changing the values of α is explicit. Indeed, for θ : E → R_+ such that α_e > θ_e for all e ∈ E, and for any random variable Y(ω) = (Y ∘ ω)(u), where ũ_e := u_e Σ_{e′:(e,e′)∈K} Z_{e,e′} u_{e′}.

Duality formula
A key feature of the hypergeometric functions defined in (2) is the following duality formula [AKKI11, Page 169], which has consequences regarding time reversal. This will be discussed in Section 4.3, and a direct proof of Lemma 4.1 will be supplied in Appendix A. Define, for γ ∈ (R*_+)^m,

B(γ) := Π_{i=1}^m Γ(γ_i) / Γ(Σ_{i=1}^m γ_i),

where Γ is the standard Gamma function, i.e. Γ(t) = ∫_0^∞ x^{t−1} e^{−x} dx. Lemma 4.1 (Duality formula). With the notation from (2), the following holds as soon as Σ_i α_i = Σ_j β_j:

(1/B(α)) Φ^{(α,β)}(Z) = (1/B(β)) Φ^{(β,α)}(Z^t),

where Z^t is the transpose of the matrix Z.
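A hedged numerical sanity check of the duality (our own illustration, assuming the integral form Φ^{(α,β)}(Z) = ∫_{∆^(n)} Π_i u_i^{α_i−1} Π_j ((Z·u)_j)^{−β_j} du from Section 2.1): in the smallest nontrivial case n = 2, l = 1, the dual simplex is a single point, so the dual side reduces to z_1^{−a_1} z_2^{−a_2}.

```python
import math

# Hedged check of the duality in the case n = 2, l = 1 with alpha = (a1, a2),
# beta = (a1 + a2), Z = (z1, z2):
#   Phi^{(alpha,beta)}(Z) = int_0^1 u^(a1-1) (1-u)^(a2-1) (z1 u + z2 (1-u))^(-(a1+a2)) du,
# while Phi^{(beta,alpha)}(Z^t) = z1^(-a1) * z2^(-a2) and B(beta) = 1, so the
# duality reads  Phi^{(alpha,beta)}(Z) / B(alpha) = z1^(-a1) * z2^(-a2).

a1, a2 = 1.3, 2.2
z1, z2 = 0.7, 1.9

def left_side(m=200000):
    # midpoint rule on (0, 1)
    h = 1.0 / m
    s = 0.0
    for k in range(m):
        u = (k + 0.5) * h
        s += u ** (a1 - 1) * (1 - u) ** (a2 - 1) * (z1 * u + z2 * (1 - u)) ** (-(a1 + a2))
    return s * h

B_alpha = math.gamma(a1) * math.gamma(a2) / math.gamma(a1 + a2)
lhs = left_side() / B_alpha
rhs = z1 ** (-a1) * z2 ** (-a2)
```

In this case the duality reduces to the classical Euler-type identity ∫_0^1 u^{a−1}(1−u)^{b−1}(z_1 u + z_2(1−u))^{−(a+b)} du = B(a, b) z_1^{−a} z_2^{−b}.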
Let π̌_ω̌ be the invariant probability measure of the Markov chain on Ě with transition probabilities ω̌. Then, since π̌_ω̌ is also the invariant probability measure of the time-reversed chain defined by ω, we have π̌_ω̌(ě) = π_ω(e) for every e ∈ E. Note that ω̌ is an element of Ω_Ǩ. Let α̌_ě := α_e for every e ∈ E. Also, denote by Ž the 'reversed' matrix corresponding to Z, that is, Ž_{ě′,ě} = (Z^t)_{e′,e} = Z_{e,e′}. Let C = {e_0, e_1, …, e_n = e_0} be a cycle in H, where n = n(C) is its length. (The reader should notice that here C is a cycle of edges, so viewed as a sequence of vertices it has the form {e̲_0, e̲_1, e̲_2, …, e̲_n = e̲_0, ē_0 = e̲_1}, i.e. a cycle of vertices plus a repetition of the vertex e̲_1.) Define Č := {ě_n, ě_{n−1}, …, ě_0 = ě_n} to be the corresponding reversed cycle in Ȟ. For a finite collection C of cycles we denote Č := {Č : C ∈ C}. Set ω_C := Π_{k=0}^{n−1} ω_{e_k,e_{k+1}}, and ω_C := Π_{C∈C} ω_C. By (15), we have ω_C = ω̌_Č for all cycles C. Similarly, we set Z_C := Π_{k=0}^{n−1} Z_{e_k,e_{k+1}} and Z_C := Π_{C∈C} Z_C. We have, by definition of Ž, that Z_C = Ž_Č for every cycle C.
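The cycle identity ω_C = ω̌_Č has a concrete elementary reason: for any irreducible kernel, the time-reversed kernel ω̌(ě′, ě) = π(e) ω(e, e′) / π(e′) assigns to the reversed cycle exactly the original cycle weight, because the stationary factors telescope around a cycle. A toy 3-state illustration (our own, not from the paper):

```python
# Toy illustration of the cycle identity for time reversal: the reversed kernel
# rev(j, i) = pi(i) * w(i, j) / pi(j) gives the reversed cycle the same weight
# as the original cycle, since the pi factors cancel around a cycle.

w = [[0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4],
     [0.5, 0.3, 0.2]]

# stationary distribution by power iteration
pi = [1.0 / 3.0] * 3
for _ in range(500):
    pi = [sum(pi[i] * w[i][j] for i in range(3)) for j in range(3)]

def rev(a, b):
    # time-reversed transition probability from a to b
    return pi[b] * w[b][a] / pi[a]

cycle = [0, 1, 2, 0]
w_C = 1.0
for a, b in zip(cycle, cycle[1:]):
    w_C *= w[a][b]

rcycle = list(reversed(cycle))
w_rC = 1.0
for a, b in zip(rcycle, rcycle[1:]):
    w_rC *= rev(a, b)
```

Note the equality holds exactly (not just in the limit of the power iteration): the π factors cancel along any closed path.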
We now introduce the divergence operator on the graph G: define div : R^E → R^V by div(θ)(x) := Σ_{e: e̲=x} θ_e − Σ_{e: ē=x} θ_e. Lemma 4.2. Assume div(α) = 0. The following holds for all finite collections of cycles C, Proof. Denote by N_e = N_e(C) the number of indices 0 ≤ k ≤ n(C) − 1 with e_k = e, summed over the cycles C ∈ C. We denote similarly by Ň = Ň(Č) the corresponding counting function for the collection of reversed cycles. Clearly, N_e = Ň_ě.
A direct computation gives Indeed, from the definition of the environment ω, see (8), the term Z_C comes from the terms Z_{e′,e} in (8), the second term comes from the times when the cycle enters e′, and the last term comes from the times when the cycle leaves e. Combined with the definitions (1), (6), (7), (10), this gives (17). Next, since div(α) = 0, the duality formula, Lemma 4.1, says that for all x ∈ V, It implies that, where G(α) := Π_{x∈V} B((α_e)_{e̲=x}).
Since C is a collection of cycles, div(N) = 0; the same applies to α + N, and we get F(α + N, Z) = F(α̌ + Ň, Ž). From (17) and since Z_C = Ž_Č, we deduce Proof.
Since ω(e, e′) and π_ω(e) are positive and bounded by 1, and E and K are finite, the law of ω̌ under P is determined by its moments. That is, it is enough (and actually equivalent) to show that for any η : Note that since the graph is finite and all ω(e, e′) ∈ (0, 1), under the quenched law the Markov chain and its time reversal are both recurrent. Notice now that the law of the recurrent Markov chain ω̌ is determined by the law of its cycles. Indeed, for all (e, e′) ∈ K, ω̌(ě′, ě) = Σ_{C∈C_{e,e′}} ω̌_Č, where C_{e,e′} is the family of all cycles C starting at e, going immediately to e′ and then returning to e for the first time. It clearly implies that if η = (η_{e,e′})_{(e,e′)∈Ǩ} is a positive vector, then ω̌^η can be written as a sum with positive coefficients of terms of the type ω̌_Č, where C are finite collections of cycles. Using Lemma 4.2, this implies that We finish with an application of the proof of the last corollary. Set H_e := inf{n ≥ 0 : X_n = e} and H⁺_{e_0} := inf{n > 0 : X_n = e_0}. Proof. As in the last corollary, it follows from the fact that the weights are strictly positive P^{(α,Z)}-a.s. that the Markov chains on the finite graphs H, Ȟ are recurrent. Hence the probability P_{ě_0,ω̌}[X_1 = ě] equals the sum of the ω̌-weights of all cycles {ẽ_1, …, ẽ_n} with ẽ_1 = ẽ_n = ě_0, ẽ_i ≠ ě_0 for 1 < i < n, and ẽ_2 = ě. To conclude, one notices that the sum of the ω-weights of the reversed cycles gives exactly P_{e_0,ω}[X_{H⁺_{e_0}−1} = e], and by Lemma 4.2 these probabilities are equal.

Arc graph identities
We now use for the arc graph H the same notation for the divergence operator as on G: div : R^K → R^E, div(Θ)(e) := Σ_{e′:(e,e′)∈K} Θ_{(e,e′)} − Σ_{e′:(e′,e)∈K} Θ_{(e′,e)}, for Θ : K → R and e ∈ E. We also denote by Θ̌ : Ǩ → R_+ the function such that Θ̌((ě′, ě)) = Θ((e, e′)). With a minor abuse of notation, the divergence on the reversed graph is defined analogously as div : R^Ǩ → R^Ě. This gives div(Θ)(e) = −div(Θ̌)(ě) for every Θ : K → R_+ and e ∈ E.
Θ : K → R + is a total flow from e 0 of strength γ if it has the form Lemma 4.6. If Θ : K → R + is a total flow from e 0 of strength γ, then Proof. First note that Hence,

Construction of good flows
Consider first the lattice Z^d. Lemma 4.7 (Min-cut total flow on H). Let d ≥ 3. Assume that (c(e))_{e∈E_{Z^d}} is uniformly bounded, i.e. there exist constants 0 < C_1 < C_2 < ∞ such that C_1 ≤ c(e) ≤ C_2 for every edge e. Fix e_0 to be an edge with e̲_0 = 0. There is a constant c_2 such that for every large enough N there is a non-negative function Θ = Θ_N on K_N with the following properties: 1. Θ_e ≤ c(e) + m(c)1_{e=e_0} (almost below the capacity).
where m(c) is the min cut of the network c.
For the proof we shall use the analogous min-cut flow statement on G. Lemma 4.8. Assume that (c(e))_{e∈E_{Z^d}} is uniformly bounded, i.e. there exist constants 0 < C_1 < C_2 < ∞ such that C_1 ≤ c(e) ≤ C_2 for every edge e. There is a constant c_1 such that for every large enough N there is a non-negative function θ = θ_N on E_N with the following properties: 1. θ(e) ≤ c(e) (below the capacity).
where m(c) is the min cut of the network c.
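The quantity m(c) appearing in both lemmas is, by the max-flow min-cut theorem, computable as a maximal flow. As a self-contained toy illustration (our own, not from the paper), here is a compact Edmonds-Karp computation on a small capacitated directed network:

```python
from collections import defaultdict, deque

# Toy illustration: m(c) = min cut = max flow (Edmonds-Karp, BFS augmenting paths).

def max_flow(edges, s, t):
    cap = defaultdict(float)       # residual capacities
    adj = defaultdict(set)
    for (u, v), c in edges.items():
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)              # residual (reverse) arcs
    flow = 0.0
    while True:
        prev = {s: None}
        q = deque([s])
        while q and t not in prev:  # BFS for a shortest augmenting path
            u = q.popleft()
            for v in adj[u]:
                if v not in prev and cap[(u, v)] > 1e-12:
                    prev[v] = u
                    q.append(v)
        if t not in prev:
            return flow
        path, v = [], t
        while prev[v] is not None:  # recover the augmenting path
            path.append((prev[v], v))
            v = prev[v]
        aug = min(cap[e] for e in path)
        for (u, v) in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug

# diamond network: the minimal cut is {(0, 2), (1, 3)} with weight 2 + 2 = 4
edges = {(0, 1): 3.0, (0, 2): 2.0, (1, 3): 2.0, (2, 3): 3.0}
m_c = max_flow(edges, 0, 3)
```

In the lemmas above the network is the torus with capacities c(e) and m(c) is the minimal c-weight of an edge set separating 0 from the boundary.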
Proof of Lemma 4.7. Fix N ≥ 2 and let θ be according to Lemma 4.8. Write simply m = m(c).

The Green function has a positive moment
In this section we prove Theorem 2.1. The proof closely follows the ones in [Sab11] and in [ST17, Section 7]. The weights α and Z on Z^d naturally yield weights on E_N. We endow the special edge (∂, x_0) with weight α_{(∂,x_0)} = γ, for some γ > 0 that will be fixed later on, and set Z_{e,e′} = 1 whenever e̲ = ∂. Set also Z_{((∂,x_0),e_0)} = 1. With this choice we note that on Ê_N, div(α) = γ(δ_∂ − δ_0).
Fix an initial edge e_0 ∈ E_{Z^d} with e̲_0 = 0. For N ≥ 2, define f_N : and Q (25) Using the Lemma, the proof is standard (see the paragraph after Lemma 1 in Sabot [Sab13], including the references therein). For convenience we give a sketch here. Consider Q Since Ω is compact, so is the space of product probability measures on it, and there is an increasing sequence of positive integers and a probability measure such that Q Since the generator is weakly Feller (i.e. continuous with respect to the weak topology), it follows that the weak limit probability measure Q^{(α,Z)} is invariant for the process viewed from the point of view of the walker on Ω. For every continuous bounded function g on Ω and every 1 < p < κ we have ∫ g dQ^{(α,Z)} ≤ c_p ‖g‖_{L^q(P^{(α,Z)})}, where 1/p + 1/q = 1 (see equation (2.14) in [BS12]). The last inequality shows that Q^{(α,Z)} is absolutely continuous with respect to P^{(α,Z)}, and for f = dQ^{(α,Z)}/dP^{(α,Z)} we have ‖f‖_{L^p(P^{(α,Z)})} ≤ c_p.
Uniformly bounding the Radon-Nikodym derivatives on the torus
In this section we prove Lemma 6.1. Let p ∈ [1, κ). Combining Lemma 4.5 and Lemma 4.6, if Remember that the root edge e_0 was chosen such that e̲_0 = 0. In the sequel we will often simply write e_i for the directed edge (0, e_i) (recall that e_1, …, e_d is the base of R^d). Now, by Hölder's inequality, Therefore, Hence, from (27), Lemma 6.1 follows once we show that for every 1 ≤ i ≤ 2d and N ∈ N there is Θ_N : K_N → R_+ satisfying (26), so that We shall now prove (28). Let α^(i), 1 ≤ i ≤ 2d, be the weights defined by α^(i) := α + κ1_{e=e_i}. That is, α^(i) gives α an extra κ on the specific edge e_i but leaves it unchanged on all other edges. Then, where m(c) is the min cut of the network (c(e))_{e∈E(Z^d)} on Z^d (that is, the minimal c-weight of a set of edges separating 0 from ∞); see equation (3.10) and the paragraph below it in [Sab13] for the proof. We shall now show (28) for the case i = 1; the other 2d − 1 cases are symmetric. Fix N ≥ 1, and apply Lemma 4.7 with c(e) = α^(1)(e) to get Θ̃ = Θ̃_N with bounded L² norm, almost below the capacity, which is a total flow from e_0 with strength m(α^(1)) dN^d. Set Then Θ is also a total flow from e_0 with a bounded L² norm and with strength p dN^d. Remember the notation β_γ from (11). Fix q = q(α, d) > 0 to be chosen later on. Let r > 0 be such that 1/r + 1/q = 1. Using Hölder's inequality and the weak reversibility (Corollary 4.3), we have Assume for the moment that the functions F and G below are well defined. This will be justified by a suitable choice of q. Using (12) we get: Using the duality Lemma 4.1 for the term with power 1/r, together with the fact that α̌_ě = α_e and Θ̌_{ě′} = Θ_{e′}, the last product equals Choice of q: The terms in the products above are well defined if all the arguments of F and G are strictly positive. Let us see what q > 0 should satisfy to achieve that.
First note that the terms with power 1/r are strictly positive, since α is strictly positive and Θ is non-negative. For the terms with power 1/q to be strictly positive, we need to have and α_e − qΘ_e + qκ1_{e=e_1} > 0.

Appendix A Duality of hypergeometric functions
In this section we give a direct proof of Lemma 4.1, the duality relation for hypergeometric functions. Note that for every t, β > 0,

t^{−β} = (1/Γ(β)) ∫_0^∞ λ^{β−1} e^{−λt} dλ.   (35)

Recall (1) and (2). The strategy is to first use (35) to construct a variable v that will play a role dual to that of u, and then to add another variable to "free the variable u from the simplex". The next step is to modify u and v to make the integral suitable for duality. The conclusion follows by performing the above steps in reverse order with the new v and u. Here is the calculation in detail, followed by some clarifications.
The third equality follows from (35). For the fifth equality, note that using the change of variables λ = Σ_i w_i, u_i = w_i/λ, we have that To see the sixth equality, make the change of variables u → ũ = (Σ_j v_j / Σ_i u_i) u and v → ṽ = (Σ_i u_i / Σ_j v_j) v, and deduce the equality from the fact that Σ_i α_i = Σ_j β_j. The penultimate equality follows from the previous equalities by interchanging the roles of (α, u, n, Z) and (β, v, l, Z^t). The last equality follows from the definition of Φ. This gives the desired duality.
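As a hedged worked illustration of the same strategy (assuming the integral form of Φ from Section 2.1), the smallest case n = 2, l = 1 can be carried out in closed form, with the Gamma trick (35) and the change of variables (w_1, w_2) = (λu, λ(1 − u)) playing the roles described above:

```latex
% Smallest case n = 2, l = 1: alpha = (a_1, a_2), beta = a_1 + a_2, Z = (z_1, z_2).
\begin{aligned}
\Phi^{(\alpha,\beta)}(Z)
 &= \int_0^1 u^{a_1-1}(1-u)^{a_2-1}\bigl(z_1 u + z_2(1-u)\bigr)^{-(a_1+a_2)}\,du \\
 &= \frac{1}{\Gamma(a_1+a_2)} \int_0^1\!\!\int_0^\infty u^{a_1-1}(1-u)^{a_2-1}
    \lambda^{a_1+a_2-1} e^{-\lambda(z_1 u + z_2(1-u))}\,d\lambda\,du
    && \text{by (35)} \\
 &= \frac{1}{\Gamma(a_1+a_2)} \int_0^\infty\!\!\int_0^\infty
    w_1^{a_1-1} w_2^{a_2-1} e^{-z_1 w_1 - z_2 w_2}\,dw_1\,dw_2
    && (w_1,w_2) = (\lambda u,\ \lambda(1-u)),\ dw_1\,dw_2 = \lambda\,d\lambda\,du \\
 &= \frac{\Gamma(a_1)\Gamma(a_2)}{\Gamma(a_1+a_2)}\, z_1^{-a_1} z_2^{-a_2}
  = B(\alpha)\,\Phi^{(\beta,\alpha)}(Z^t),
\end{aligned}
```

since for l = 1 the dual simplex is a single point and Φ^{(β,α)}(Z^t) = z_1^{−a_1} z_2^{−a_2}, while B(β) = 1; this is the duality of Lemma 4.1 in its simplest instance.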