A Resolvent Approach to Metastability

We provide a necessary and sufficient condition for the metastability of a Markov chain, expressed in terms of a property of the solutions of the resolvent equation. As an application of this result, we prove the metastability of reversible, critical zero-range processes starting from a configuration.


Introduction
Metastability is a physical phenomenon ubiquitous in first-order phase transitions. An attempt at a precise description can be traced back, at least, to Maxwell [52].
In the mid-1980s, Cassandro, Galves, Olivieri and Vares [20], building on Lebowitz and Penrose [46], proposed a first rigorous method for deducing the metastable behavior of Markov processes, based on the theory of large deviations developed by Freidlin and Wentzell [23]. This method, known as the pathwise approach to metastability, was successfully applied to many models in statistical mechanics [55].
In the following years, different approaches were put forward. At the beginning of the century, Bovier, Eckhoff, Gayrard and Klein [14,15,16] replaced the large deviations tools with potential theory to derive sharp estimates for the transition times between wells, the so-called Eyring-Kramers law. We refer to [17] for a comprehensive review of this method, known as the potential theoretic approach to metastability.
More recently, Beltrán and Landim [6,7] characterized the metastable behavior of a process as the convergence of the order process, a coarse-grained projection of the dynamics, to a Markov chain. Inspired by [15] and based on the martingale characterization of Markov processes, they provided different sets of sufficient conditions for metastability. We refer to [34] for a review of the martingale approach to metastability.
In this article, we show that the metastable behavior of a sequence of Markov chains can be read from a property of the solutions of the resolvent equation associated to the generator of the process. It turns out that this property is not only sufficient, but also necessary for metastability. This is the content of Theorem 2.3.
As these conditions for metastability do not rely on the explicit knowledge of the stationary state, they can, in principle, be employed to derive the metastable behavior of non-reversible dynamics whose stationary states are not known.
To emphasize the strength of our method, we show that the necessary and sufficient conditions for metastability can be derived from the ones introduced in [6,7], which have been proved to hold for all models whose metastable behavior has been derived through the potential theoretic method [15] or the martingale method [6,7]. Moreover, the recent articles [47,36,37] successfully apply the approach introduced here to non-reversible overdamped Langevin dynamics.
We further illustrate the extent of possible applications by proving that the conditions for metastability required in this article hold for a dynamics with poor mixing properties: reversible, condensing, critical zero-range processes. This model does not satisfy the conditions in [6], and its metastable behavior could only be derived so far when the process starts from measures spread over a well [41]. The new approach permits one to extend this result to reversible dynamics in which the process starts from a fixed configuration.
We leave for the future the investigation of the metastability of critical asymmetric zero-range processes. For this model, the mixing condition M, introduced in Subsection 6.2, is very delicate, in that the mixing time is slightly smaller than the escape time. In the reversible situation considered here, we verify condition M through a careful construction of a sub-harmonic function. It seems difficult to extend this construction to the non-reversible case. Beyond condition M, all other steps are identical to the reversible case.
Recent advancements. Before providing a more detailed statement of the main results, we review recent progress in the theory of metastability.
Markov Chain Monte Carlo algorithms have been widely used to sample from a given Gibbs measure. Their efficiency is expressed by the speed of convergence to equilibrium. It has been shown in several different contexts that non-reversible dynamics converge faster to equilibrium than their reversible counterparts. This is derived by Kaiser, Jack and Zimmer [30] for the large deviations from the hydrodynamic limit of interacting particle systems described by the Macroscopic Fluctuation Theory. Bouchet and Reygner [13] show that the transition time between two wells in overdamped Langevin dynamics is faster in the non-reversible case. A similar result appears in [44] for random walks in potential fields.
These results raise the problem of finding the non-reversible perturbation of a reversible dynamics that does not alter the invariant distribution and optimizes the rate of convergence. Lelièvre, Nier and Pavliotis [48] solve this problem for overdamped Langevin equations with quadratic potential. Guillin and Monmarché [28] show that the asymptotic rate of convergence of generalized Ornstein-Uhlenbeck processes is maximized by non-reversible hypoelliptic ones.
There are only a few other results on metastability for non-reversible dynamics. Le Peutrec and Michel [50] obtain, by semiclassical analysis, the Eyring-Kramers formula for the exponentially small eigenvalues of the generator of a non-reversible overdamped Langevin dynamics associated to a potential which is a Morse function satisfying additional regularity properties.
In the last years, the close connection between quasi-stationary states and exponential exit laws has been exploited in many different directions. Bianchi, Gaudillière and Milanesi [11,12] expressed the mean transition time in terms of soft capacities and derived sufficient conditions for metastability in terms of local and global mixing characteristics of the dynamics. Miclo [53] provided an estimate on the distance between the exit time of a set and an exponential law. Di Gesù, Lelièvre, Le Peutrec and Nectoux [26,49] investigated the distribution of the exit point from a domain. Berglund [9] reviews analytical methods to derive metastability. Di Gesù [25] recently derived the Eyring-Kramers formula for the exponentially small eigenvalues of the generator of reversible discrete diffusions by semiclassical analysis, an expansion obtained in [42,44] by stochastic methods.
We turn to a precise description of the results.
The model. Consider a sequence of countable sets H_N, N ∈ N, and a collection of H_N-valued, irreducible, continuous-time Markov chains (ξ_N(t) : t ≥ 0). To fix ideas, one may think that the sets H_N are finite with cardinality increasing to infinity, but this is not necessary.
Let S be a fixed finite set and Ψ_N : H_N → S a projection, in the sense that the cardinality of S is much smaller than that of H_N. Elements of H_N are represented by the Greek letters η, ξ, while those of S by x, y. The problem we address is under what conditions the order process (Y_N(t) : t ≥ 0), defined by Y_N(t) = Ψ_N(ξ_N(t)), is close to a Markovian dynamics which mimics the dynamics of ξ_N(t).
Denote by E^x_N the inverse image of x ∈ S under Ψ_N, E^x_N = Ψ_N^{−1}(x), and by L_N the generator of the Markov chain ξ_N(t). The sets E^x_N are called wells. The following condition plays a central role in the article.
Resolvent condition. Fix a function g : S → R, and denote by G_N : H_N → R its lifting, G_N = Σ_{x∈S} g(x) χ_{E^x_N}, where χ_A, A ⊂ H_N, stands for the indicator of the set A. For λ > 0, denote by F_N the solution of the resolvent equation (λ − L_N) F_N = G_N. (1.1) Assume that, for each λ > 0, F_N is asymptotically constant on each set E^x_N: there exists a function f : S → R such that lim_{N→∞} max_{η ∈ E^x_N} |F_N(η) − f(x)| = 0 for all x ∈ S. (1.2) Of course, f depends on λ and on g. Assume, furthermore, that there exists a generator L of an S-valued, continuous-time Markov chain such that, for all λ > 0 and g : S → R, f = (λ − L)^{−1} g. (1.3) We claim that, under the resolvent conditions (1.2), (1.3), any limit point of the sequence of processes Y_N(·) = Ψ_N(ξ_N(·)) is the law of the continuous-time Markov chain whose generator is L. The proof of this claim is so simple that we present it below. It relies on the martingale characterization of Markovian dynamics.
Assume that the sequence Y_N(·) converges in law. Fix λ > 0 and a function f : S → R, and let F_N be the solution of the resolvent equation (1.1) with g = (λ − L)f. Since F_N(ξ_N(t)) − ∫_0^t (L_N F_N)(ξ_N(s)) ds is a martingale and L_N F_N = λ F_N − G_N, conditions (1.2), (1.3) yield that f(Y_N(t)) − ∫_0^t (L f)(Y_N(s)) ds + o_N(1) is a martingale, where o_N(1) is a small error which vanishes uniformly as N → ∞. As g = (λ − L)f, passing to the limit yields that any limit point solves the martingale problem associated to the generator L. To complete the argument, it remains to recall the uniqueness of solutions of martingale problems on finite state spaces.
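The resolvent condition can be tested numerically on a caricature of a metastable chain: two wells of two states each, connected by a small rate eps. All rates below are illustrative assumptions, not taken from the article. The solution of the resolvent equation is nearly constant on each well while it distinguishes the wells; as eps → 0 (with no speed-up of the chain), F converges on each well to f = g/λ, the solution of the reduced equation with the trivial limiting generator L = 0.

```python
import numpy as np

# Toy 4-state chain: states {0, 1} form well A, states {2, 3} form well B.
# Intra-well rates are O(1); the inter-well rate eps is small (illustrative rates).
eps = 1e-4
L = np.array([
    [-1.0,        1.0,        0.0,  0.0],
    [ 1.0, -1.0 - eps,        eps,  0.0],
    [ 0.0,        eps, -1.0 - eps,  1.0],
    [ 0.0,        0.0,        1.0, -1.0],
])

lam = 0.5                               # resolvent parameter lambda > 0
g = {0: 2.0, 1: -1.0}                   # a function on the reduced space S = {A, B}
G = np.array([g[0], g[0], g[1], g[1]])  # lifting: constant on each well

# Solve the resolvent equation (lambda - L) F = G
F = np.linalg.solve(lam * np.eye(4) - L, G)

# The intra-well spread of F is O(eps), while F differs between the wells
spread_A = abs(F[0] - F[1])
spread_B = abs(F[2] - F[3])
gap = abs(F[0] - F[2])
print(spread_A, spread_B, gap)
```

Here F is close to g(A)/λ = 4 on well A and to g(B)/λ = −2 on well B, up to corrections of order eps.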
The resolvent condition is also necessary. The previous approach provides a general method to describe a complex system, a Markovian dynamics evolving in a large space H_N, in terms of a much simpler one, an S-valued Markov chain. This abridgement has been named Markov chain model reduction, or metastability; see [34] and references therein.
The point here is that the existence of this synthetic description of the dynamics can be read from a simple property of the generator. It is in force if the resolvent operator U_{λ,N} := (λ − L_N)^{−1} sends functions which are constant on the sets E^x_N to functions which are asymptotically constant on these sets.
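For a finite-state generator, the operator U_{λ,N} = (λ − L_N)^{−1} can be checked against the spectral representation of the semigroup, ∫_0^∞ e^{−λt} e^{tL} dt = (λ − L)^{−1}, together with the sup-norm contraction bound ‖λ U_{λ,N} G‖_∞ ≤ ‖G‖_∞. The chain below is an illustrative symmetric example, not one from the article.

```python
import numpy as np

# Illustrative symmetric 3-state generator (rows sum to zero; toy rates)
L = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -1.5,  0.5],
              [ 1.0,  0.5, -1.5]])
lam = 0.7
G = np.array([1.0, -2.0, 0.5])

# Resolvent: F = (lam - L)^{-1} G
F = np.linalg.solve(lam * np.eye(3) - L, G)

# Spectral check: int_0^infty e^{-lam t} e^{tL} G dt = V diag(1/(lam - w)) V^T G,
# using the eigendecomposition L = V diag(w) V^T (L is symmetric here)
w, V = np.linalg.eigh(L)
F_spec = V @ ((V.T @ G) / (lam - w))
print(np.allclose(F, F_spec))   # True

# Contraction bound: ||lam * F||_inf <= ||G||_inf
print(lam * np.max(np.abs(F)) <= np.max(np.abs(G)) + 1e-12)   # True
```

The contraction bound is the finite-state version of the L∞ estimate exploited later in Remark 2.6.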
The second main point of the article is that conditions (1.2), (1.3) are not only sufficient for the convergence of the order process Y N (•), but also necessary.
Applications. The last claim of the article is that this method to derive the metastable behavior, in the sense of the model reduction described above, of a sequence of Markov processes can be applied to a wide range of dynamics. We support this assertion by providing sufficient conditions for assumptions (1.2), (1.3) to hold. These conditions rely on mixing properties of the dynamics and have been derived in several different contexts in previous papers. Furthermore, in the last part of the article, we show that these conditions are in force for reversible, critical zero-range dynamics. In particular, we are able to extend the results presented in [41] to the case in which the process starts from a configuration, instead of a measure spread over a well E^x_N.

Comments.
In concrete examples, one first has to find the time-scale θ_N at which a metastable behavior is observed. Then, one speeds up the evolution by this quantity and proves all properties of the dynamics in this new time-scale. Speeding up the process by θ_N corresponds to multiplying the generator by the time-scale θ_N. In the previous discussion, we started from a generator which has already been sped up, which means that the metastable behavior is observed in the time-scale θ_N = 1.
This approach to metastability, inspired by techniques in PDE to study the asymptotic behavior of solutions of reaction-diffusion equations [21,59], appeared in the context of Markov processes in [54,45,56]. In these articles, for different models, it is proved that the solutions of the Poisson equation L_N F_N = G_N are asymptotically constant in each well.
Replacing the Poisson equation with resolvent equations has a significant advantage, as the solutions of the latter equation are bounded. In particular, it permits one to prove L^∞ estimates instead of the L² estimates derived in [41]. This, in turn, allows the process to start from a fixed configuration instead of a measure spread over the sets E^x_N. The existing methods to derive the metastable behavior of a Markov process rely on explicit computations involving the stationary state [17,34]. In contrast, as already pointed out at the beginning of this introduction, the deduction of (1.2) and (1.3) does not appeal to the stationary state.
Introducing a transition region. Condition (1.2) is expected to hold only in very special cases, where the jump rates between configurations belonging to different sets E^x_N vanish asymptotically. Only in such a case can one hope for a discontinuity of the solution of the resolvent equation (1.1) at the boundary of the set E^x_N [a consequence of condition (1.3)].
To surmount this problem, we introduce a transition set ∆ N which separates the wells E x N .The set ∆ N has to be sufficiently large to isolate the wells, but small enough to be irrelevant from the point of view of the dynamics.
In this new set-up, the sets E^x_N, x ∈ S, together with ∆_N = H_N \ ∪_{x∈S} E^x_N, form a partition of the state space H_N, and we set E_N = ∪_{x∈S} E^x_N. To bypass the set ∆_N, we focus our attention on the trace of the process ξ_N(·) on E_N and provide sufficient conditions for the projection of the trace process to converge to a Markovian dynamics. This result requires conditions (1.2), (1.3) to hold only on the set E_N.
Furthermore, in this framework, Theorem 2.3 asserts that the resolvent conditions (1.2), (1.3) hold if, and only if, (a) the order process converges to the S-valued Markov chain whose generator is L and (b) the process ξ_N(·) spends only a negligible amount of time outside the wells E^x_N.
Critical zero-range processes. As mentioned above, this approach is applied to a special class of zero-range processes. This Markovian dynamics describes the evolution of particles on a finite set S. Denote by N ≥ 1 the total number of particles and by η = (η_x : x ∈ S) a configuration of particles. Here, η_x represents the number of particles at site x for the configuration η. Let H_N = {η ∈ N^S : Σ_{x∈S} η_x = N} be the state space. Particles jump on S according to some rates which will be specified in the next section. It has been shown that a condensation phenomenon occurs for this family of rates. The precise statement requires some notation. Fix a sequence (ℓ_N : N ≥ 1) such that ℓ_N → ∞, ℓ_N/N → 0. Denote by E^x_N, x ∈ S, the set of configurations given by E^x_N = {η ∈ H_N : η_x ≥ N − ℓ_N}. For the models alluded to above, μ_N(E^x_N) → 1/|S|, where μ_N represents the stationary state of the dynamics [22,29,27,8,3,4,1].
This means that, under the stationary state, essentially all particles sit on a single site. In consequence, in terms of the dynamics, one expects the zero-range process to evolve as follows. When it reaches a set E^x_N, it remains there a very long time, performing short excursions in ∆_N. Its sojourn at E^x_N before it hits a new well E^y_N, y ≠ x, is long enough for the process to equilibrate inside the well E^x_N. The transition from E^x_N to a new well E^y_N is abrupt, in the sense that its duration is much shorter than the total time the process stayed in E^x_N. We apply the method presented at the beginning of this introduction to derive the asymptotic evolution of the position of the condensate (the site x where almost all particles sit) for critical, reversible zero-range dynamics.
The metastable behavior of condensing zero-range processes has a long history [8,35,2,58,54,41]. The critical case, examined here and in [41], presents a major difference with respect to the super-critical case considered before. While in the super-critical case, when entering a well, the process visits all its configurations before visiting a new well, this is no longer true in the critical case. This difference prevents the use of the martingale approach, proposed in [6,7], to prove the metastable behavior of a sequence of Markov chains.
To overcome this problem, we show that in the critical case, when entering a well, the process hits the bottom of this well before reaching another well. The proof of this result relies on the super-harmonic functions constructed in [41] and on mixing properties of the process reflected at the boundary of the wells. The fact that the process visits one specific configuration inside the well permits one to prove its metastable behavior starting from any configuration inside a well.
Combining the property that the process quickly hits the bottom of a well with the fact that it mixes inside the well before reaching its boundary permits one to prove that the solution of the resolvent equation fulfills (1.2). The proof of property (1.3) relies also on a computation of capacities between wells. Details are given in Sections 8-12.
To our knowledge, this is the first model in which the process does not visit single configurations and for which one can nevertheless prove metastability starting from a fixed configuration, and derive explicit formulae for the time-scale at which metastability occurs and for the generator L of the asymptotic dynamics.
Along the same lines, Schlichting and Slowik [57] extended the investigation of metastability to continuous-time Markov chains which do not hit single points. They derived asymptotically sharp estimates for mean hitting times by generalizing the potential theoretic approach to deal with metastable sets, instead of just metastable points. This technique has been applied by Bovier, den Hollander, Marello, Pulvirenti and Slowik [18] to inhomogeneous mean-field models.
Directions for future research. As observed above, it is conceivable to derive properties (1.2), (1.3) without turning to the stationary state. In particular, this approach might permit one to deduce the metastable behavior of non-reversible dynamics for which the stationary measure is not known explicitly (say, non-reversible diffusions [13]). Furthermore, proving properties (1.2) and (1.3) for a generator L_N becomes an interesting problem in itself, since they yield (modulo a third property) the metastable behavior of the associated Markovian dynamics.

A Resolvent Approach to Metastability
In this section, we provide a set of sufficient conditions for a sequence of continuous-time Markov chains to exhibit a metastable behavior. If the framework below seems too abstract, the reader may read this section together with the next one, where we apply these results to a concrete example, the critical zero-range process.
We start by introducing the general framework proposed in [6,7] to describe the metastable behavior of a Markovian dynamics as a Markov chain model reduction. Let (H_N : N ≥ 1) be a collection of finite sets. Elements of the set H_N are designated by the letters η, ξ and ζ.
Consider a sequence (ξ_N(t) : t ≥ 0) of H_N-valued, irreducible, continuous-time Markov chains, whose generator is represented by L_N. Therefore, for every function f : H_N → R, (L_N f)(η) = Σ_{ξ ∈ H_N} R_N(η, ξ) [f(ξ) − f(η)], where R_N(η, ξ) stands for the jump rates. Denote by λ_N(η) the holding rates of the Markov chain, λ_N(η) = Σ_{ξ ≠ η} R_N(η, ξ), and by μ_N the unique stationary state. Denote by D(R_+, H_N) the space of right-continuous functions x : R_+ → H_N with left-limits, endowed with the Skorohod topology and its associated Borel σ-field. Let P^N_η, η ∈ H_N, be the probability measure on D(R_+, H_N) induced by the process ξ_N(·) starting from η ∈ H_N. Expectation with respect to P^N_η is represented by E^N_η. Fix a finite set S, and denote by E^x_N, x ∈ S, a family of disjoint subsets of H_N. Let E_N = ∪_{x∈S} E^x_N and ∆_N = H_N \ E_N. The sets E^x_N, x ∈ S, represent the metastable sets of the dynamics ξ_N(·), in the sense that, as soon as the process ξ_N(·) enters one of these sets, say E^x_N, it equilibrates in E^x_N before hitting a new set E^y_N, y ≠ x. The goal of the theory is to describe the evolution between these sets. To this end, we introduce the order process.
For A ⊂ H_N, denote by T_A(t) the total time the process ξ_N(·) spends in A in the time-interval [0, t]: T_A(t) = ∫_0^t χ_A(ξ_N(s)) ds, (2.2) where χ_A represents the indicator function of the set A. Denote by S_A(t) the generalized inverse of T_A(t): S_A(t) = sup{s ≥ 0 : T_A(s) ≤ t}. The trace of ξ_N(·) on A, denoted by (ξ^A_N(t) : t ≥ 0), is defined by ξ^A_N(t) = ξ_N(S_A(t)). (2.3) It is an A-valued, continuous-time Markov chain, obtained by turning off the clock when the process ξ_N(·) visits the set A^c, that is, by deleting all excursions to A^c. For this reason, it is called the trace process of ξ_N(·) on A. Let Ψ_N : E_N → S be the projection given by Ψ_N(η) = x if η ∈ E^x_N. The order process (Y_N(t) : t ≥ 0) is defined as Y_N(t) = Ψ_N(ξ^{E_N}_N(t)). Denote by Q^N_η, η ∈ E_N, the probability measure on D(R_+, S) induced by the measure P^N_η and the order process Y_N. The definition of metastability relies on two conditions. Let L be a generator of an S-valued, continuous-time Markov chain. Denote by Q^L_x, x ∈ S, the probability measure on D(R_+, S) induced by the Markov chain whose generator is L and which starts from x. Condition C_L. For each x ∈ S and each sequence (η_N : N ≥ 1) with η_N ∈ E^x_N, the sequence of laws Q^N_{η_N} converges to Q^L_x as N → ∞.
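The time change defining the trace process has a simple algorithmic content: excursions in A^c are deleted and the clock stops while the path is outside A. A minimal sketch on a piecewise-constant path, with made-up states and durations:

```python
# Sketch: trace on a set A of a piecewise-constant trajectory, given as a list
# of (state, duration) sojourns. Sojourns outside A are deleted, which stops
# the clock during excursions to A^c; adjacent sojourns in the same state merge.
def trace(path, A):
    """path: list of (state, duration); returns the traced path on A."""
    out = []
    for state, dur in path:
        if state in A:
            if out and out[-1][0] == state:   # merge consecutive sojourns
                out[-1] = (state, out[-1][1] + dur)
            else:
                out.append((state, dur))
    return out

# Illustrative path: "d" plays the role of a state outside A
path = [("a", 1.0), ("d", 0.3), ("a", 0.5), ("b", 2.0), ("d", 0.1), ("b", 0.4)]
A = {"a", "b"}
print(trace(path, A))   # -> [('a', 1.5), ('b', 2.4)]
```

The two visits to "a" separated by an excursion to "d" become one sojourn of length 1.5, exactly as the generalized inverse S_A(·) prescribes.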
The next condition asserts that the process ξ_N(·) spends a negligible amount of time in ∆_N on each finite time interval. It ensures that the trace process does not differ much from the original one when starting from a well. Condition D. For every t > 0, lim_{N→∞} max_{η ∈ H_N} E^N_η[∫_0^t χ_{∆_N}(ξ_N(s)) ds] = 0.
The next definition is taken from [6]. Definition 2.1. The sequence of Markov chains (ξ_N(t) : t ≥ 0) is said to be L-metastable if conditions D and C_L hold.
The first main result of this article provides sufficient conditions, expressed in terms of properties of the solutions of resolvent equations, for condition C_L to hold. The second one asserts that these sufficient conditions are also necessary.
Fix a function g : S → R, and let G_N : H_N → R be its lifting to H_N, given by G_N = Σ_{x∈S} g(x) χ_{E^x_N}. (2.4) Note that the function G_N is constant on each well E^x_N and vanishes on ∆_N. For λ > 0, denote by F_N the unique solution of the resolvent equation (λ − L_N) F_N = G_N. (2.5) Condition R_L. For all λ > 0 and g : S → R, the unique solution F_N of the resolvent equation (2.5) is asymptotically constant in each set E^x_N: lim_{N→∞} max_{η ∈ E^x_N} |F_N(η) − f(x)| = 0 for all x ∈ S, (2.6) where f : S → R is the unique solution of the reduced resolvent equation (λ − L) f = g. Remark 2.2. Condition R_L is usually proved in two steps. One first shows that, for every λ > 0 and g : S → R, the solution F_N of the resolvent equation (2.5) is asymptotically constant on each well; in other words, that (2.6) holds for some f. Then, one proves that (λ − L)f = g for some generator L.
The first main result of the article reads as follows.
Theorem 2.3. The process ξ_N(·) is L-metastable if, and only if, condition R_L is fulfilled. In other words, conditions D and C_L hold if, and only if, condition R_L is in force.
Remark 2.4. This result provides a new tool to prove metastability. The existing methods rely on explicit computations involving the stationary state. In particular, they cannot be applied to non-reversible dynamics whose stationary states are not known explicitly, such as small perturbations of dynamical systems or the superposition of Glauber and Kawasaki dynamics. In contrast, proving that the solution of a resolvent equation is asymptotically constant on the wells might be achieved without resorting to the stationary state.
Remark 2.5. The introduction of the set ∆_N which separates the wells makes condition R_L plausible. The challenge is to tune ∆_N correctly: sufficiently large for R_L to hold, but small enough for D to hold.
Remark 2.6. Solving the martingale problem through a resolvent equation, instead of a Poisson equation, simplifies considerably the proof of the metastable behavior of the process. As the solutions of the resolvent equations are bounded (cf. (4.2)), one can hope to obtain bounds and convergence in L^∞, as we do here, instead of in L². Moreover, many L¹-estimates simplify substantially due to the L^∞-bound on the solution of the resolvent equation.
Remark 2.7. Since condition R_L is necessary and sufficient for metastability, it holds for all models whose metastable behavior has been derived so far. The reader will find in [17,34] a list of such dynamics.
2.1. Applications. To convince the skeptical reader that condition R_L is not too stringent, besides the fact, alluded to before, that it is also necessary for metastability, we provide two frameworks where this condition can be proven. First, Theorem 2.8 states that condition R_L follows from properties (H0) and (H1), introduced in [6,7]. Then, in the next section, to illustrate how to prove condition R_L when assumption (H1) is violated, we prove that it holds for critical condensing zero-range processes.
The statement of Theorem 2.8 requires some notation. Denote by r_N(x, y) the mean-jump rate between the sets E^x_N and E^y_N: r_N(x, y) = (1/μ_N(E^x_N)) Σ_{η ∈ E^x_N} μ_N(η) λ_N(η) P^N_η[τ^+_{E^y_N} = τ^+_{E_N}]. In this formula, τ_A, τ^+_A, A ⊂ H_N, stand for the hitting time and the return time of the set A, respectively: τ_A = inf{t > 0 : ξ_N(t) ∈ A}, τ^+_A = inf{t > T_1 : ξ_N(t) ∈ A}, (2.9) where T_1 represents the time of the first jump of the chain.
Condition (H0). For all x ≠ y ∈ S, the sequence r_N(x, y) converges. Denote its limit by r(x, y): r(x, y) = lim_{N→∞} r_N(x, y).
Let D_N(F) be the Dirichlet form of a function F : H_N → R with respect to the generator L_N: D_N(F) = ⟨F, (−L_N) F⟩_{μ_N}. A summation by parts yields that D_N(F) = (1/2) Σ_{η, ξ ∈ H_N} μ_N(η) R_N(η, ξ) [F(ξ) − F(η)]². Fix two disjoint and non-empty subsets A and B of H_N. The equilibrium potential between A and B with respect to the process ξ_N(·) is denoted by h_{A,B} and is given by h_{A,B}(η) = P^N_η[τ_A < τ_B]. The capacity between A and B is given by cap_N(A, B) = D_N(h_{A,B}).
Condition (H1). For each x ∈ S, there exists a sequence of configurations (ξ^x_N : N ≥ 1), ξ^x_N ∈ E^x_N, such that the capacity between any configuration of E^x_N and ξ^x_N is asymptotically much larger than the capacity between E^x_N and the union of the other wells.
Theorem 2.8. Assume that conditions (H0), (H1) are in force. Then, the solution F_N of the resolvent equation (2.5) is asymptotically constant on each well E^x_N in the sense that lim_{N→∞} max_{η ∈ E^x_N} |F_N(η) − F_N(ξ^x_N)| = 0. Furthermore, let f_N : S → R be the function given by f_N(x) = F_N(ξ^x_N), and let f be a limit point of the sequence f_N. Then, for all y ∈ S such that μ_N(∆_N)/μ_N(E^y_N) → 0, ((λ − L_Y) f)(y) = g(y), (2.11) in which g is the function in equality (2.4). In this formula, L_Y is the generator of the continuous-time Markov process whose jump rates are given by r(x, y), introduced in (H0).
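For a reversible birth-death chain, the equilibrium potential and the capacity can be computed exactly, which gives a concrete check of the identity cap(A, B) = D_N(h_{A,B}); the rates below are illustrative. For such chains the capacity also equals the classical series formula (Σ_k 1/c_k)^{−1} in terms of the conductances c_k, which the sketch verifies.

```python
import numpy as np

# Sketch (toy rates): equilibrium potential h between A = {0} and B = {n} of a
# reversible birth-death chain, and the capacity as the Dirichlet energy of h.
n = 5
c = np.array([1.0, 2.0, 0.5, 2.0, 1.0])   # conductances c_k = mu(k) R(k, k+1)
mu = np.ones(n + 1)                        # reversible measure (unnormalized)

# Generator: R(k, k+1) = c[k]/mu[k], R(k+1, k) = c[k]/mu[k+1] (detailed balance)
L = np.zeros((n + 1, n + 1))
for k in range(n):
    L[k, k + 1] += c[k] / mu[k]
    L[k + 1, k] += c[k] / mu[k + 1]
L -= np.diag(L.sum(axis=1))

# Equilibrium potential: h(0) = 1, h(n) = 0, (L h)(k) = 0 for 0 < k < n
M = L.copy(); b = np.zeros(n + 1)
M[0] = 0.0; M[0, 0] = 1.0; b[0] = 1.0
M[n] = 0.0; M[n, n] = 1.0; b[n] = 0.0
h = np.linalg.solve(M, b)

# Capacity = Dirichlet form of h = sum_k c_k (h(k+1) - h(k))^2
cap = np.sum(c * np.diff(h) ** 2)
print(cap)   # equals (sum_k 1/c_k)^(-1) = 0.2 for these conductances
```

Here (Σ_k 1/c_k)^{−1} = (1 + 0.5 + 2 + 0.5 + 1)^{−1} = 0.2, and h decreases monotonically from A to B.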
Remark 2.11. Lemma 7.4 provides a sufficient condition for the identity (2.11) to hold in the case where μ_N(∆_N)/μ_N(E^y_N) does not vanish asymptotically.
The rest of the article is organized as follows. In Section 3, we introduce the critical zero-range process and state, in Theorem 3.2, that it fulfills conditions R_L and D. In Section 4, we prove Theorem 2.3. In Sections 5-7, we prove Theorem 2.8 and provide further sets of sufficient conditions, namely, conditions V and M, for D or R_L to hold. These families of sufficient conditions were designed to encompass most dynamics whose metastable behavior has been derived so far. Sections 8-12 of the article are devoted to the proof of Theorem 3.2.

Critical Zero-range Dynamics
In this section, we introduce the critical condensing zero-range process to which we apply the resolvent approach described in the previous section. Fix a finite set S with |S| = κ ≥ 2 elements, and consider a continuous-time Markov chain on S with generator L_X acting on functions f : S → R as (L_X f)(x) = Σ_{y ∈ S} r(x, y) [f(y) − f(x)], for some jump rate r : S × S → R_+ assumed to be symmetric [r(x, y) = r(y, x) for all x, y ∈ S]. Set r(x, x) = 0 for all x ∈ S for convenience. Denote by (X(t))_{t≥0} the Markov chain generated by L_X, and assume that this chain is irreducible. Note that the process X(·) is reversible with respect to the uniform measure m(·) on S [m(x) = 1/κ for all x ∈ S].
The zero-range process describes the evolution of particles on S. A configuration η ∈ N^S of particles is written as η = (η_x)_{x∈S}, where η_x represents the number of particles at x under the configuration η. For N ∈ N and S_0 ⊂ S, denote by H_{N,S_0} ⊂ N^{S_0} the subset of configurations on S_0 with exactly N particles: H_{N,S_0} = {η ∈ N^{S_0} : Σ_{x ∈ S_0} η_x = N}. Let H_N = H_{N,S}. The critical zero-range process is the continuous-time Markov chain {η^N(t)}_{t≥0} on H_N with generator acting on functions F : H_N → R as (L_N F)(η) = Σ_{x, y ∈ S} g(η_x) r(x, y) [F(σ^{x,y} η) − F(η)], (3.2) where g(0) = 0, g(1) = 1, and g(n) = n/(n−1) for n ≥ 2. In this equation, σ^{x,y} η, x, y ∈ S, stands for the configuration obtained from η by moving a particle from x to y, when there is at least one particle at x: (σ^{x,y} η)_x = η_x − 1, (σ^{x,y} η)_y = η_y + 1, and (σ^{x,y} η)_z = η_z for z ≠ x, y.
3.1. Condensation of particles. It is elementary to check that the unique invariant measure for the irreducible Markov chain η^N(·) is given by μ_N(η) = (1/Z_{N,κ}) (N/(log N)^{κ−1}) Π_{x ∈ S} a(η_x), where a(0) = 1, a(n) = 1/[g(1) g(2) ⋯ g(n)] = 1/n for n ≥ 1, and where the partition function Z_{N,κ} is defined by Z_{N,κ} = (N/(log N)^{κ−1}) Σ_{η ∈ H_N} Π_{x ∈ S} a(η_x). The factor N/(log N)^{κ−1} was introduced so that Z_{N,κ} has a non-degenerate limit when N tends to infinity: by [41, Proposition 4.1], lim_{N→∞} Z_{N,κ} = κ. Furthermore, the zero-range process η^N(·) is reversible with respect to μ_N(·). Define the metastable well E^x_N, x ∈ S, as E^x_N = {η ∈ H_N : η_x ≥ N − ℓ_N}. Assume that ℓ_N = N/log N for simplicity. The set E^x_N can be regarded as a collection of configurations in which almost all particles are sitting at site x. As defined previously, let E_N = ∪_{x∈S} E^x_N and ∆_N = H_N \ E_N. Then μ_N(∆_N) → 0 and μ_N(E^x_N) → 1/κ for every x ∈ S. Hence, as N → ∞, under the invariant measure, almost all particles are condensed at a single site. In this sense, the critical zero-range process η^N(·) exhibits condensation. The main result of the article describes the evolution of the condensate.
This model is said to be "critical" for the following reason. Suppose that we replace g(η_x) in (3.2) by [g(η_x)]^α for some α > 0. It is known that the condensation phenomenon occurs if α ≥ 1, while a diffusive behavior without condensation is observed if α < 1. For this reason, the zero-range process η^N(·), which corresponds to α = 1, is said to be critical.
3.2. Order process. Let θ_N = N² log N be the time-scale at which the condensate moves, and denote by ξ_N(·) the process obtained by speeding up the zero-range process η^N(·) by θ_N, i.e., ξ_N(t) = η^N(t θ_N) for all t ≥ 0. Note that the process ξ_N(·) is the H_N-valued, continuous-time Markov chain whose generator is given by θ_N L_N. Denote by P^N_η the probability measure on D(R_+, H_N) induced by the process ξ_N(·) starting from η ∈ H_N, and by E^N_η the expectation with respect to P^N_η. Recall from (2.2), (2.3) the definition of the trace process (ξ^{E_N}_N(t))_{t≥0}, of the projection Ψ_N : E_N → S, and of the order process (Y_N(t))_{t≥0}. For critical zero-range processes, the order process Y_N(·) specifies the position of the condensate for the trace process ξ^{E_N}_N(t). Recall that Q^N_η, η ∈ H_N, denotes the probability law on D(R_+, S) induced by the order process Y_N(·) when the underlying zero-range process ξ_N(·) starts from η.

Main result.
We first introduce the S-valued Markov chain (Y(t))_{t≥0} describing the evolution of the condensate. Denote by τ^X_C, C ⊂ S, the hitting time of the set C with respect to the random walk X(·) introduced above.
Let P^X_x, x ∈ S, be the law of the process X(·) starting at x. For two non-empty, disjoint subsets A, B of S, the equilibrium potential between A and B with respect to the process X(·) is given by h^X_{A,B}(x) = P^X_x[τ^X_A < τ^X_B]. The capacity between A and B is given by cap_X(A, B) = D_X(h^X_{A,B}), where D_X(·) stands for the Dirichlet form associated to the process X(·), which can be written as D_X(f) = (1/2) Σ_{x, y ∈ S} m(x) r(x, y) [f(y) − f(x)]² for f : S → R. If the sets A, B are singletons, we write cap_X(x, y) instead of cap_X({x}, {y}).
Denote by (Y(t))_{t≥0} the S-valued, continuous-time Markov chain associated to the generator L_Y, defined in (3.7), acting on functions f : S → R. Recall from the previous section that we denote by Q^{L_Y}_x, x ∈ S, the probability measure on D(R_+, S) induced by the Markov chain Y(·) starting from x.
The next theorem is the third main result of the article.
Theorem 3.2. Conditions R_{L_Y} and D hold for the critical zero-range process, where L_Y is given by (3.7). In particular, the critical zero-range process is L_Y-metastable.
In view of Theorem 2.3, this result establishes that, in the time-scale θ_N = N² log N, the condensate evolves as the Markov chain Y(·) and that, with the exception of time intervals whose total length is negligible, almost all particles sit on a single site.
Remark 3.3. The so-called martingale approach developed in [6,7] to derive the metastable behavior of a Markov process, based on potential theory, does not apply here, because the process does not visit all points of a well before jumping to a new one, and condition (H1) of [6] is violated. This characteristic is the main difference between the critical zero-range process and the super-critical ones.
Remark 3.4. By using the so-called Poisson equation approach developed in [45,54,56], we proved in [41] a weaker version of Theorem 3.2. Denote by μ^x_N(·), x ∈ S, the measure on E^x_N obtained by conditioning μ_N on E^x_N: μ^x_N(η) = μ_N(η)/μ_N(E^x_N), η ∈ E^x_N. We assumed in [41] that the initial distribution is a measure ν_N concentrated on a set E^x_N for some x, and satisfying the following L²-condition: there exists a finite constant C_0 such that Σ_{η ∈ E^x_N} ν_N(η)²/μ^x_N(η) ≤ C_0. (3.8) The main novelty of Theorem 3.2 lies in the fact that it removes assumption (3.8) and allows the process to start from a fixed configuration inside some well.
Remark 3.5.The proof of Theorem 3.2 relies on many estimates obtained in [41].
In particular, on the construction of a super-harmonic function inside the wells.
Remark 3.6.The equilibration inside the well, or the loss of memory, is obtained in two different manners.First, we derive a sharp bound on the relaxation time of the process reflected at the boundary of a well.This relaxation time is shown to be much smaller than the metastable time-scale θ N .Then, we show that the process visits the bottom of the well before visiting a new well.This crucial property is derived with the help of the super-harmonic function alluded to above.Thus, although the process does not visit all configurations in a well before reaching a new one, it visits a specific configuration.
Remark 3.7. The symmetry of the jump rates r of the chain X is used in the construction of the super-harmonic function. Theorem 3.2 should remain in force without this assumption, but a proof is missing.
The proof of Theorem 3.2 relies on Theorem 2.3. The strategy is presented in Section 8, and the details in Sections 9-12.

4. Proof of Theorem 2.3
In the first part of this section, we show that condition R L implies assertions C L and D. In the second part, we prove the reverse statement.
In particular, there exists a finite constant C_0 = C_0(λ, g) such that ‖F_N‖_∞ ≤ C_0 for all N; this is the bound (4.2) used repeatedly below. The next result asserts that condition R_L implies assertion D.
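For orientation, the probabilistic representation recalled in (4.1) and the resulting bound are presumably the standard ones (a sketch under this assumption; notation as in Section 2):

```latex
F_N(\eta) \;=\; E^N_\eta\Big[ \int_0^\infty e^{-\lambda t}\, G_N(\xi_N(t))\, dt \Big],
\qquad
\|F_N\|_\infty \;\le\; \frac{1}{\lambda}\, \|G_N\|_\infty ,
```

the second estimate following from the first by bounding G_N by its sup norm and integrating the exponential.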
Lemma 4.1. Assume that condition R_L holds. Then, assertion D is in force.
Proof. We first claim that, for all λ > 0,
Indeed, fix λ > 0 and g : S → R given by g(x) = 1 for all x ∈ S. Let G_N, F_N be given by (2.4) and (2.5), respectively. By (4.1), and since the solution f of the reduced resolvent equation (λ − L)f = g is f(x) = 1/λ for all x ∈ S, claim (4.3) follows from (2.6). Fix t > 0, λ > 0, and observe that
for all η ∈ H_N. Hence, condition D follows from (4.3).
We now derive some consequences of condition R_L. The next result asserts that the process ξ_N(•) cannot jump from one well to another quickly. The proof of this result is similar to that of [56, Proposition 5.2]. Recall from (2.9) that we denote by τ_A, A ⊂ H_N, the hitting time of the set A. Let
Proof. Fix λ > 0, x ∈ S and η_N ∈ E^x_N. Let f : S → R be the function given by f(y) = 1 − δ_{x,y}. Set g = (λ − L)f, and denote by F_N the solution of the resolvent equation (2.5). Let M_N(t) be the martingale defined by
where τ = τ_{Ĕ^x_N}. By condition R_L and the definition of f, lim_{N→∞} F_N(η_N) = 0. By (4.2) and the definition of G_N, λF_N − G_N is bounded. The right-hand side of (4.6) is thus bounded by a_N + C_0 t for some finite constant C_0 and a sequence a_N such that a_N → 0.
We turn to the left-hand side of (4.6). Since f ≥ 0, by condition R_L there exists a constant c_N ≥ 0 such that c_N → 0 and
To prove this claim, let ζ be a configuration at which F_N achieves its minimum value, so that
The left-hand side of (4.6) is equal to
Hence, the left-hand side of (4.6) is bounded below by (1/2)
Putting together the previous estimates yields that
To complete the proof of the lemma, it remains to remark that
The next result states that the sequence (Q^N_{η_N})_{N∈N} is tight.
Proposition 4.3. Assume that condition D and (4.5) hold. Then, the sequence (Q^N_{η_N})_{N∈N} is tight, and any limit point
Proof. This result follows from conditions D, (4.5) and Aldous' criterion. We refer to [41, Theorem 5.4] for a proof.
Recall from (2.1) the definition of the time-change S_A(t), A ⊂ H_N. Clearly, for all t ≥ 0,
In contrast, we only have S_A(T_A(t)) ≥ t, and a strict inequality may occur. Furthermore, for all t > 0 and ε > 0,
Indeed, if S_A(t) ≥ t + ε, applying T_A to both sides of this inequality, as T_A is an increasing function, yields, by (4.8),
This last relation corresponds exactly to the right-hand side of (4.9). Denote by {F⁰_t}_{t≥0} the natural filtration of D(R_+, H_N) generated by the process ξ_N(•), F⁰_t = σ(ξ_N(s) : s ∈ [0, t]), and by {F_t}_{t≥0} its usual augmentation. Let G^N_t be the filtration defined by
Note that these expressions are positive since S_{E_N}(r) ≥ r for all r ≥ 0.
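To fix ideas, if T_A denotes the additive functional of the time spent in A and S_A its generalized right-continuous inverse (an assumption consistent with (2.1), (4.8) and (4.9)), the relations above read:

```latex
T_A(t) \;=\; \int_0^t \chi_A\big(\xi_N(s)\big)\, ds,
\qquad
S_A(t) \;=\; \sup\{\, s \ge 0 \,:\, T_A(s) \le t \,\} .
```

Then T_A(S_A(t)) = t for all t ≥ 0 because T_A is continuous and non-decreasing, while S_A(T_A(t)) ≥ t may be strict when ξ_N spends an interval of time outside A; moreover {S_A(t) ≥ t + ε} ⊂ {T_A(t + ε) ≤ t}, which is the inclusion behind (4.9).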
Proof. To prove the first assertion, note that the expectation is bounded by
where K_λ(a) = 1 − e^{−λa}. Fix ε > 0. As K_λ is continuous, there exists δ > 0 such that K_λ(a) ≤ ε for all 0 ≤ a ≤ δ. Since K_λ is bounded by 1, the previous expectation is less than or equal to
By (4.9) and Chebyshev's inequality, this expression is bounded by
At this point, the first claim of the lemma follows from condition D by taking the limit N → ∞ and then ε → 0. The proof of the second assertion is similar. The expectation is equal to
As r → S_{E_N}(r) − r and K_λ are increasing maps, this expectation is bounded by
At this point, the second assertion of the lemma follows from the first one.
The next result establishes the uniqueness of limit points of the sequence Q N η N .
Proposition 4.5. Assume that condition R_L is in force. Fix x ∈ S and a sequence (η_N)_{N∈N} such that η_N ∈ E^x_N for all N ∈ N. Let Q* be a limit point of the sequence Q^N_{η_N} which satisfies (4.7). Then, Q* = Q^L_x.
Proof. Fix λ > 0, a function f : S → R, and let g = (λ − L)f. Denote by F_N the solution of (2.5). Under the measure P^N_{η_N}, the process M_N(t) given by
is a martingale with respect to the filtration {F_t}_{t≥0} defined above (4.10). By (2.5), we may replace (λ
Recall the definition of the filtration {G^N_t}_{t≥0} from (4.10). Since S_{E_N}(t) is a stopping time with respect to F_t, the process M̃_N(t) = M_N(S_{E_N}(t)) is a martingale with respect to the filtration
The presence of the indicator of the set E_N in the integral allows one to perform the change of variables r = T_{E_N}(r). Hence, by (4.8),
By the definitions of G_N and Y_N(•), by condition R_L, and by Lemmata 4.1 and 4.4,
where, for all t > 0, lim
and let Q* be a limit point of the sequence Q^N_{η_N} satisfying the hypothesis of the proposition. As M̃_N(t) is a martingale and η_N ∈ E^x_N, by (4.11),
To complete the proof, it remains to appeal to the uniqueness of solutions of martingale problems in finite state spaces.
We are now in a position to prove that condition R_L entails C_L and D.
Proof: The statement follows from Lemma 4.1 and Propositions 4.3 and 4.5.

4.2. Conditions C_L and D imply R_L. Recall equation (4.1) for F_N. Since G_N vanishes on ∆_N, we may rewrite this identity as
As the chain ξ_N(t) is irreducible, lim_{t→∞} T_E(t) = ∞. Hence, by the change of variables t = T_E(t),
where the absolute value of the remainder R^(1)_N is bounded by
Note that the convergence is uniform in E^x_N, because we may consider a subsequence η_N ∈ E^x_N of initial conditions which attains the maximum and apply condition C_L to this sequence. By (4.1), the second term in the previous formula is f(x), where f is the solution of (2.7).
To complete the proof of the theorem, it remains to show that the remainder R^(1)_N(η) converges uniformly to 0. This is a consequence of the second assertion of Lemma 4.4.

5. Potential theory
We review below some results from potential theory used in the next three sections. The notation is the one introduced in Section 2. Recall that we represent by R_N : H_N × H_N → [0, ∞) the jump rates of the process ξ_N(•), and by λ_N(η) = Σ_{ζ≠η} R_N(η, ζ) the holding rates. We adopt the convention that the jump rates vanish on the diagonal: R_N(η, η) = 0 for all η ∈ H_N. Denote the jump probabilities by p_N(η, ζ) = R_N(η, ζ)/λ_N(η). We represent by
Denote by L†_N the adjoint of the generator L_N in L²(µ_N). It is well known that L†_N is the generator of an H_N-valued, continuous-time Markov chain, represented by ξ†_N(•). The jump rates, holding rates and jump probabilities of this process are denoted by R†_N(η, ζ), λ†_N(η) and p†_N(η, ζ), respectively. For a probability measure ν on H_N, we denote by P^{†,N}_ν the measure on D(R_+, H_N) induced by ξ†_N(•) starting from ν. Expectation with respect to P^{†,N}_ν is represented by E^{†,N}_ν.
Fix two disjoint and non-empty subsets A and B of H_N. The equilibrium potential between A and B with respect to the process ξ_N(•) was introduced in (2.10). The one for the adjoint process ξ†_N(•) is denoted by h†_{A,B} : H_N → [0, 1] and is given by
Recall from (2.8) the definition of the mean-jump rates between E^x_N and E^y_N for the process ξ_N(•). The ones for the adjoint process ξ†_N(•), represented by r†_N(x, y), are defined analogously. Since the holding rates of the adjoint process coincide with the original ones, λ†_N(η) = λ_N(η), r†_N(x, y) is equal to the right-hand side of (2.8) with P^N_η replaced by P^{†,N}_η. The first result of this section establishes an elementary identity between the mean jump rates of the process and those of its adjoint. Recall that Ĕ^x_N was introduced in (4.4).
Lemma 5.1. For all x ≠ y ∈ S,
Proof. By the definition (2.8) of the jump rates r_N(x, y),
This measure is invariant for the embedded, discrete-time Markov chain. With this notation, the right-hand side can be written as
Reversing the trajectory, this sum is seen to be equal to
which proves the first assertion of the lemma.
To prove the second one, note that
By equation (2.4) and Lemma 2.3 in [24], cap_N(Ĕ^x_N,
where the latter expression represents the capacity with respect to the adjoint process. To conclude the proof, it remains to rewrite the same two identities for the adjoint process.

Conditions (H0), (H1).
Recall from Section 2 the statement of these conditions. We present below some of their consequences. The next result is [6, Proposition 5.10], which essentially asserts that the process hits every configuration inside a metastable set before arriving at another metastable set.
Lemma 5.2. Assume that condition (H1) is in force. Fix x ∈ S and a sequence
The next result asserts that, starting from a well E^x_N, the process ξ_N(•) visits any point of E^x_N quickly.
Lemma 5.3. Assume that conditions (H0) and (H1) are in force. Fix x ∈ S, and let (ζ_N : N ≥ 1) be a sequence of configurations such that ζ_N ∈ E^x_N for all N ≥ 1. Then, for all δ > 0, lim sup
Proof. Fix a sequence (η_N : N ≥ 1) such that η_N ∈ E^x_N for all N ≥ 1. By [7, Theorem 2.1], the process Y_N(•) converges to Y(•). The assertion of the lemma follows from this fact, Lemma 5.2 and [6, Lemma 3.1].
In the reversible case, the mean jump rate r_N(•, •) can be expressed in terms of capacities: by [6, Lemma 6.8], r_N(x, y) is equal to
in which E_N(S \ {x, y}) = ∪_{z∈S\{x,y}} E^z_N. Hence, estimating the mean-jump rates boils down to estimating the capacity between metastable wells, which can be achieved by using the variational characterizations of capacities, known as the Dirichlet and Thomson principles [34]. In the non-reversible case, a robust strategy for estimating mean-jump rates via capacities between wells has also been developed in [7,35,44].
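As a concrete, purely illustrative sketch of the Dirichlet principle in the reversible case — cap_N(A, B) = inf{ D_N(f) : f = 1 on A, f = 0 on B }, with D_N(f) = (1/2) Σ_{η,ζ} µ_N(η) R_N(η, ζ) (f(ζ) − f(η))² — the following computes the capacity of a toy birth-death chain in two ways: by solving for the equilibrium potential and evaluating the Dirichlet form, and by the closed-form series for one-dimensional chains. The model, potential and rates below are hypothetical; they are not the zero-range process of the paper.

```python
import numpy as np

# Toy reversible birth-death chain on {0,...,n} with a double-well
# stationary measure mu (illustrative model, not the paper's process).
n = 20
x = np.arange(n + 1)
V = ((x / n - 0.5) ** 2 - 0.25) ** 2 * 40   # double-well potential
mu = np.exp(-V)
mu /= mu.sum()

# Detailed balance: mu(k) R(k,k+1) = mu(k+1) R(k+1,k).
# Work with edge conductances c(k) = mu(k) R(k,k+1).
c = np.sqrt(mu[:-1] * mu[1:])
R_up = c / mu[:-1]
R_dn = c / mu[1:]

# Generator L of the chain (rows sum to zero).
L = np.zeros((n + 1, n + 1))
for k in range(n):
    L[k, k + 1] += R_up[k]; L[k, k] -= R_up[k]
    L[k + 1, k] += R_dn[k]; L[k + 1, k + 1] -= R_dn[k]

# Equilibrium potential h between A = {0} and B = {n}:
# h = 1 on A, h = 0 on B, and L h = 0 at interior points.
h = np.zeros(n + 1)
h[0] = 1.0
interior = slice(1, n)
A_mat = L[interior, interior]
b = -L[interior, 0] * 1.0        # boundary contribution from h(0) = 1
h[interior] = np.linalg.solve(A_mat, b)

# Capacity as the Dirichlet form of the equilibrium potential:
# each edge contributes c(k) * (h(k+1) - h(k))^2.
cap_dirichlet = np.sum(c * (h[1:] - h[:-1]) ** 2)

# Closed-form for birth-death chains: cap = (sum_k 1/c(k))^{-1}.
cap_series = 1.0 / np.sum(1.0 / c)
print(cap_dirichlet, cap_series)
```

The two values agree, because the equilibrium potential is the minimizer of the Dirichlet form and, in one dimension, carries a constant flux across every edge.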
We complete this section with a formula for the average of equilibrium potentials. Fix two disjoint and non-empty subsets A and B of H_N. According to [7, Proposition A.2],
where ν†_{A,B} is the equilibrium measure between A and B: (5.4)

6. The solutions of the resolvent equation
Theorem 2.3 asserts that a sequence of Markov processes is metastable if conditions R_L and D are fulfilled. In this section and in the next, we present sufficient conditions for R_L and D to hold. We start by dividing condition R_L into two sub-conditions, namely conditions R^(1) and R^(2)_L.
In this section, we present two mixing properties, assumptions V and M, which imply condition R^(1). As a by-product, we show that condition M implies condition D, postponing condition R^(2)_L to the next section.
Condition R^(1). The solution F_N of the resolvent equation (2.5) is asymptotically constant on each well E^x_N, in the sense that lim
Remark 6.1. Clearly, condition R^(1) is satisfied if the wells E^x_N are singletons, as in the Ising model under Glauber dynamics [19] or the simple inclusion process [10,32].
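In words, condition R^(1) presumably amounts to the vanishing of the oscillation of F_N on each well; a plausible formalization, consistent with the reformulation via the functions f_N in Section 7, is:

```latex
\lim_{N\to\infty}\; \max_{x\in S}\;
\max_{\eta,\zeta \in E^x_N} \big|\, F_N(\eta) - F_N(\zeta) \,\big| \;=\; 0 .
```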
6.1. Visiting condition V. The first condition is built upon the existence, in each well, of a configuration which is visited on a time-scale much shorter than the metastable one.
for all s > 0, y ∈ S.
The next result asserts that this property is a sufficient condition for R^(1) to hold. Its proof is postponed to the end of the subsection.
Proposition 6.2. Condition V implies condition R^(1).
Remark 6.3. Condition (6.1) requires the process to visit the bottom of the well quickly. It is weaker than (H1), which implies that the process visits all configurations in a well before jumping to a new one. Actually, Proposition 10.1 asserts that a stronger version of condition (6.1) holds for reversible, critical zero-range processes, a model which does not satisfy condition (H1).
Corollary 6.4. Assume that conditions (H0) and (H1) are in force. Then, R^(1) holds.
We turn to the proof of Proposition 6.2. We start by showing that we may mollify the solution with the semigroup (P_N(t) : t ≥ 0) associated with the generator L_N.
Lemma 6.6. For all T > 0,
Proof. Fix T > 0 and 0 < t ≤ T. By the representation (4.1) of F_N,
By a change of variables, the right-hand side can be rewritten as
The first term is equal to F_N(η) + R_N, where the absolute value of the remainder R_N is bounded by t ‖G_N‖_∞. As 1 − e^{−a} ≤ a for a ≥ 0, the second term is bounded by t ‖G_N‖_∞.
Proposition 6.7. If the mixing property M is satisfied, then condition R^(1) holds.
Remark 6.8. Barrera and Jara [5] proved that the mixing time of small random perturbations of dynamical systems satisfying certain regularity assumptions is of polynomial order. Since the hitting time of the boundary is exponentially large [23], the previous result applies in this setting.
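The mollification step can be sketched as follows (a standard computation, assuming the representation F_N = ∫_0^∞ e^{−λs} P_N(s) G_N ds of (4.1)): applying P_N(t) and changing variables u = s + t,

```latex
P_N(t)\, F_N \;=\; e^{\lambda t}\Big( F_N \;-\; \int_0^t e^{-\lambda u}\, P_N(u)\, G_N \, du \Big),
```

so that, for 0 < t ≤ T, ‖F_N − P_N(t)F_N‖_∞ ≤ (e^{λt} − 1) ‖F_N‖_∞ + t e^{λt} ‖G_N‖_∞ ≤ C(λ, T) t ‖G_N‖_∞, which is the type of bound stated in Lemma 6.6.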
The proof of Proposition 6.7 relies on a simple comparison between the semigroup of the original process and the semigroup of the reflected one.
Lemma 6.9. For each x ∈ S, η ∈ E^x_N and t > 0,
Proof. Write
In the first term, we may replace the process ξ_N(•) by the reflected one, since the process remained in the set V^x_N during the time-interval [0, t]. The second term is bounded by
Writing the indicator function of the set {τ_{(V^x_N)^c} > t} as 1 minus the indicator of its complement, we conclude the proof of the lemma.
Proof of Proposition 6.7. By Lemmata 6.6 and 6.9 with T = t = h_N, and hypothesis (6.3), lim
Fix x ∈ S, η ∈ E^x_N and ε > 0. By the definition of the total variation distance,
(6.5) By the contraction property of the semigroup, this distance is decreasing in t, and thus, by (6.4), the right-hand side of (6.5) is bounded from above by
by the definition of the mixing time. This completes the proof of the proposition, because the sequence (F_N) is uniformly bounded in N.

6.2. Local equilibration and condition D. The same argument shows that condition M yields a fast local equilibration inside each well. In particular, condition D results from assumption M and the property that
Consider a uniformly bounded sequence of functions
Recall the definition of the probability measure µ^x_N, introduced in Remark 3.4.
Proposition 6.10. Assume that condition M is in force. Then, for all x ∈ S, T > 0, sup
where the error term o_N(1) on the right-hand side is uniform over N, M and T.
Proof. Fix x ∈ S, η ∈ E^x_N, and let
(6.7) By (6.4), there exists a sequence (ε_N : N ≥ 1) such that lim_N ε_N = 0 and
By (6.3), this expectation is equal to
By the Markov property and the definition of q_N, we may write the previous expectation as
Recall that we denote by ξ^{R,x}_N(•) the process reflected at the boundary of V^x_N. Denote by P^{R,x}_η the law of the reflected process ξ^{R,x}_N(•), and by E^{R,x}_η the expectation with respect to P^{R,x}_η. Due to the presence of the indicator of the set {τ_{(V^x_N)^c} > h_N}, we may replace ξ_N(h_N) by ξ^{R,x}_N(h_N) in the previous expectation and then remove the indicator of that set. After these modifications, the previous expression becomes
By the definition of s_N, and since s_N ≤ h_N, the expectation is equal to
The assertion of the proposition follows from this bound by averaging over ζ according to µ^x_N.
Corollary 6.11. Assume that condition M is in force. Then, for all x ∈ S, T > 0, sup
In particular, if µ_N(∆_N)/µ_N(E^x_N) → 0 for all x ∈ S, then condition D holds.
Proof. By the proposition, for every x ∈ S, η ∈ E^x_N and T > 0,
The expectation is bounded by
where the last identity follows from the fact that µ_N is the stationary state.
7. Proof of Theorem 2.8
In this section, we examine the possible limits of the averages of the solutions of the resolvent equation (2.5) in each well, and prove Theorem 2.8. Most of the notation is borrowed from Section 5.
Recall from the statement of Theorem 2.8 the definition of the function f_N. Note that condition R^(1) holds if and only if lim
Condition R^(2)_L. Let L be the generator of an S-valued, continuous-time Markov chain. For all x ∈ S, lim
where f : S → R is the solution of the reduced resolvent equation
It is clear that R^(1) and R^(2)_L together imply condition R_L. By (4.2) and the definition of f_N, there exists a finite constant C_0 = C_0(λ, g) such that sup
Let L be the generator of the S-valued Markov chain induced by the rates r introduced in condition (H0):
and let ν†_y = ν†_{E^y_N, Ĕ^y_N}, y ∈ S, be the equilibrium measure between E^y_N and Ĕ^y_N, as defined in (5.4).
Proposition 7.2. Assume that conditions (H0) and R^(1) are in force. Let f be a limit point of the sequence f_N. Then, for all y ∈ S such that lim
By Lemma 5.1, this expression can be rewritten as
Σ_{x≠y} r_N(y, x).
For the same reasons, the sum of the first two terms in (7.3) is equal to
Collecting the previous calculations and dividing by µ_N(E^y_N) permits rewriting (7.2) as
where the absolute value of R^(2)_N is bounded by
for some finite constant C_0 = C_0(λ, g). By (5.3), with A = E^y_N, B = Ĕ^y_N, and the second assertion of Lemma 5.1, this expression can be rewritten as
for a possibly different constant C_0. To conclude the proof, it remains to recall the statement of conditions (H0), R^(1), and the hypotheses of the proposition.
In the previous proof we used the identity
In particular, (7.1) holds for y ∈ S if and only if the right-hand side vanishes as N → ∞.
Corollary 7.3. Assume that conditions (H0) and R^(1) are in force. Let f be a limit point of the sequence f_N. Then, for all y ∈ S such that µ_N(∆_N)/µ_N(E^y_N) → 0,
In this formula, L_Y is the generator of the continuous-time Markov process whose jump rates r(x, y) were introduced in (H0).
Proof. The right-hand side of (7.4) is bounded by µ_N(∆_N)/µ_N(E^y_N). Thus, the assertion of the corollary follows from Proposition 7.2.
We complete this section with a method to prove condition (7.1) when the hypotheses of Corollary 7.3 are not satisfied. The idea behind the decomposition below is that A_N is contained in the basin of attraction of Ĕ^y_N. In particular, starting from a configuration in A_N, the set Ĕ^y_N is reached quickly. We refer to Figure 1 for an illustration of the set A_N.
Then, (7.1) holds for y.
Proof. In equation (7.1), write χ_{∆_N} as χ_{A_N} + χ_{∆_N \ A_N}. We estimate the two pieces separately. By (7.4), with ∆_N \ A_N in place of ∆_N,
By hypothesis, this expression vanishes as N → ∞.
On the other hand, starting the integral from the hitting time of A_N and applying the strong Markov property yields that
By assumption, this expression vanishes as N → ∞, which completes the proof of the lemma.

8. Proof of Theorem 3.2
In view of Theorem 2.3, to prove Theorem 3.2, we have to show that conditions D and R L hold.The proof of these properties is based on the theory developed in the previous sections.We proceed as follows.
Condition R_L. In Proposition 10.1, we show that condition V is fulfilled. Hence, by Proposition 6.2, property R^(1) holds.
In Corollary 12.2, we show that condition (H0) holds. Since we already proved that condition R^(1) is fulfilled, and since, by Theorem 3.1, µ_N(∆_N)/µ_N(E^x_N) → 0 for all x ∈ S, Corollary 7.3 yields that property R^(2)_{L_Y} is in force, where L_Y is the generator introduced in (3.7).
Condition D. Recall assumptions (6.3) and (6.4) of condition M. In Corollary 9.2, we show that condition (6.3) holds for some enlarged wells V^x_N and a time-scale h_N ≪ 1. Then, in Proposition 11.1, we prove that, for every ε > 0, the mixing time t^x_mix(ε) of the zero-range process reflected at the boundary of the set V^x_N is bounded by a sequence s_N ≪ h_N. This property implies condition (6.4). These two results yield condition M, which is the assertion of Corollary 11.2. Thus, by Theorem 3.1 and Corollary 6.11, property D is fulfilled.
Remark 8.1. To deduce property R^(1), one could also invoke Corollary 11.2 and Proposition 6.7. On the other hand, condition R^(2)_{L_Y} has been proven in an alternative way in [41, Section 7].

9. Escape from large wells
In this section, we prove that condition (6.3) holds for the critical zero-range process, for a sequence (h_N)_{N∈N} with h_N → 0, and enlarged wells (V^x_N)
and let h_N be the macroscopic time-scale given by
For x ∈ S, define a larger well by
As in [41], denote by W^x_N, D^x_N, x ∈ S, the wells given by (9.3). In this formula, γ ∈ (0, 2/κ) is a fixed constant. The sets D^x_N are called deep wells and the sets W^x_N shallow wells. Denote by ζ^x_N ∈ H_N the configuration in which all N particles are located at site x, so that
The main result of this section asserts that the process ξ_N(•) starting from a well E^x_N cannot escape from the well W^x_N within the time-scale h_N.
Proposition 9.1. For all x ∈ S,
By the last inclusion of (9.4), the next result is a straightforward consequence of Proposition 9.1.
Corollary 9.2. For all x ∈ S,
9.1. Estimates based on capacity. In this subsection, we state several estimates based on the following bound on the capacity between E^x_N and (W^x_N)^c with respect to the critical zero-range process.
Lemma 9.3. There exists a finite constant C such that
Proof. Let Q(η) = q(N − η_x) for some function q : Z → R such that
The precise expression for q will be specified below in (9.5). By the Dirichlet principle, and since
By the definition of the jump rates and of Q, this expression is bounded by
for some finite constant C. Let
It follows from the penultimate displayed equation that cap
Here, we used the facts that Z_{N,κ} and Z_{i,κ−1} are bounded, that N/(N − i) ≈ 1, and that log
For the sake of completeness, we recall its definition and main properties below. With this notation, let
Recall from (3.5) that h_{A,B}(•) and cap_X(•, •) represent the equilibrium potential and the capacity, respectively, associated with the underlying random walk X. For each non-empty subset A of S_0, consider the sequence (b^A_{x,y})_{x,y∈S} defined by
For each non-empty subset A of S_0, define the quadratic function P_A(•) as
By [41, Lemma 10.8],
Fix A ⊊ S_0. For each constant c_A > 0 and positive integer ℓ ≥ 1, let P^ℓ_A : U^{x_0}_N → R be given by
The dependence of P^ℓ_A on the constant c_A is omitted from the notation. Taking P_∅(η) = 0 for all η ∈ U^{x_0}_N, define the corrector function W :
By [41, Lemma 10.10], there exists a constant 0 < C < ∞ such that
Hence, by (10.2) and the previous bound, P_{S_0}(η) − W(η) > 0 for all η ∈ U^{x_0}_N.
On the other hand, if W(η) = P_A(η) and W(σ^{x,y}η) = P_B(σ^{x,y}η) for some A ≠ B, then, by the definition of W and (10.6),
This completes the proof of (10.5), and hence that of the lemma.
10.3. Hitting times of the reflected process. In this subsection, we establish, in Lemma 10.6 below, that the assertion of Proposition 10.1 holds for the reflected process ξ^x_N(•). The first result asserts that the process ξ^x_N(•) hits the set D^x_N quickly when it starts from a configuration in E^x_N.
Lemma 10.4. There exists C > 0 such that, for all x ∈ S, N ≥ 1, sup
Proof. By the martingale formulation, for every t > 0,
By Lemma 10.3, there exists a positive constant C, whose value may change from line to line, such that
On the other hand, by (10.4), G^x_N is non-negative. Therefore, by the next-to-last displayed equation,
By (10.4), there exists a finite constant
To complete the proof of the lemma, it remains to let t → ∞.
The next result asserts that ξ x N (•) hits the configuration ζ x N quickly when it starts from a configuration in D x N .
Lemma 10.5. There exists a finite constant C such that
for all x ∈ S and N ≥ 1.
Proof. If η = ζ^x_N, there is nothing to prove. For η ≠ ζ^x_N, we recall the well-known identity (cf. [6, Proposition 6.10])
where h^{N,x}_{η,ζ^x_N} and cap^x_N(η, ζ^x_N) denote the equilibrium potential and the capacity between η and ζ^x_N with respect to the reflected process ξ^x_N(•), respectively, and where µ^x_N(•) denotes the invariant measure conditioned on W^x_N, i.e.,
Observe that µ^x_N(•) is the invariant measure of the reflected process ξ^x_N(•). Applying the trivial bound h^{N,x}_{η,ζ^x_N} ≤ 1 in (10.7), we get
Actually, in [41, Lemma 9.3] this bound is proved for the capacity with respect to the original zero-range process, but the same proof applies to the reflected process. To complete the proof, it remains to combine the previous bounds.
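For the reader's convenience, the classical identity invoked above (cf. [6, Proposition 6.10]) presumably takes the following form, modulo normalization conventions: writing E^x_η for the expectation of the reflected process started at η,

```latex
E^x_\eta\big[\, \tau_{\zeta^x_N} \big]
\;=\; \frac{1}{\operatorname{cap}^x_N(\eta, \zeta^x_N)}
\sum_{\xi \in W^x_N} \mu^x_N(\xi)\, h^{N,x}_{\eta,\zeta^x_N}(\xi) ,
```

so that the trivial bound h ≤ 1 yields E^x_η[τ_{ζ^x_N}] ≤ 1/cap^x_N(η, ζ^x_N), and it remains to bound the capacity from below, as in [41, Lemma 9.3].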
Lemma 10.6. For all x ∈ S,
Proof. By Lemmata 10.4, 10.5, and the strong Markov property,
since we assumed that γ < 2/κ. The assertion of the lemma follows from Chebyshev's inequality.
10.4. Proof of Proposition 10.1. Consider the canonical coupling of the zero-range process ξ_N(•) and the reflected process ξ^x_N(•), started together at η ∈ W^x_N. The two processes move together until ξ_N(•) hits (W^x_N)^c. From this point on, they move independently according to their respective dynamics. By Proposition 9.1, starting from E^x_N, we can couple the original zero-range process and the reflected process ξ^x_N(•) up to time h_N with probability close to 1. The joint law of ξ_N(•) and ξ^x_N(•) under this canonical coupling is represented by P^{N,x}_η. Denote by τ_A and τ̃_A the hitting times of a set A with respect to ξ_N(•) and ξ^x_N(•), respectively.
Proof of Proposition 10.1. Recall the definition of the sequence u_N introduced at the beginning of Section 10, and that of h_N presented in (9.1). Fix η ∈ E^x_N. By Proposition 9.1,
Recall the canonical coupling introduced above. On the event {τ_{(W^x_N)^c} > h_N}, the two processes ξ_N(t) and ξ^x_N(t) move together up to time h_N. Since u_N ≪ h_N, on the previous event, the sets {τ_{ζ^x_N} ≥ u_N} and {τ̃_{ζ^x_N} ≥ u_N} coincide. Thus,
By Lemma 10.6, this quantity vanishes as N → ∞. It remains to combine the previous estimates.

11. Condition M for critical zero-range processes
In this section, we prove condition (6.4) for a time-scale s_N ≪ h_N and the large wells V^x_N introduced in (9.2). For N ≥ 1, define
Note that s_N ≪ h_N. Recall from Subsection 6.2 and equation (6.2) the definitions of the reflected process ξ^{R,x}_N(•) and of the total variation distance d^x_TV(•, •). For t ≥ 0, let D^x_TV(t) := sup_{η∈V^x_N} d^x_TV( δ_η P^{R,x}_N(t), π^{R,x} ).
Note that we prove here a stronger version of mixing than the one required in condition M, since the supremum in the definition of D^x_TV(t) is taken over all configurations in V^x_N.
Proposition 11.1. For all x ∈ S, lim_{N→∞}
It follows from this result that, for all ε > 0, the mixing time t^x_mix(ε) is bounded by s_N for N sufficiently large. In particular, condition (6.4) holds because s_N ≪ h_N.
Corollary 11.2. Condition M holds for the critical zero-range process.
Proof. The claim follows from Corollary 9.2, Proposition 11.1, and the fact that s_N ≪ h_N.
The proof of Proposition 11.1 is divided into several steps. We first show that the process ξ^{R,x}_N(•) hits the configuration ζ^x_N on the time-scale u_N. The reasoning carried out in the proof of Lemma 10.6 does not apply to the process ξ^{R,x}_N(•), because Lemma 10.3 does not hold for it. We present below an alternative argument, based on Propositions 9.1 and 10.1.
Finally, the test function F_{S_1} : H_N → R is defined by
The main property of this test function is stated in the following lemma.
We omit the proof of this lemma, since it is identical to those of [8, (5.11), (5.12), Proposition 5.3]. Even though these results are proved in [8] for α > 1, the same argument also holds for α = 1. The only part that differs is [8, Lemma 5.2], which is used in the proof of [8, (5.11)]; we replace it by the following lemma.
Letting ε → 0 completes the proof.
12.3. Auxiliary lemmata. In this subsection, we prove two technical lemmata. The first one is used in the proof of Proposition 12.3.
Lemma 12.7.For all c > 0 and x, y ∈ S, we have

4.1. Condition R_L entails C_L and D. We first show that the solution of the resolvent equation is bounded. Fix a function g : S → R and λ > 0. It is well known that the solution of the resolvent equation (2.5) can be represented as

Figure 1. This picture illustrates the idea behind the statement of Lemma 7.4. To simplify, we argue in a continuous setting, but the same idea applies to the discrete setting. Consider a diffusion on the potential field appearing in the picture. The valley E^y_N is a metastable set and E^z_N a stable one. As µ_N(∆_N)/µ_N(E^y_N) does not converge to 0, we decompose ∆_N as (∆_N \ A_N) ∪ A_N, so that µ_N(∆_N \ A_N)/µ_N(E^y_N) → 0. On the other hand, as A_N is a subset of the domain of attraction of the valley E^z_N, we can expect (7.5) to hold.

and b A x, y = 0 otherwise.By elementary properties of the capacity and the equilibrium potential, b A x, y = b A y, x for all x, y ∈ S (cf.[41, Lemma 10.2]).Moreover, by [41, Lemma 10.3], b A x, y ≤ b B x, y for all x, y ∈ S (10.1) if A ⊂ B ⊂ S 0 .

Recall from Subsection 10.4 the definition of the canonical coupling of the zero-range process ξ_N(•) and the reflected process ξ^x_N(•). The same definition permits to
a partition of unity Θ^x_y : D → [0, 1], y ≠ x, in the sense that Σ_{y∈S\{x}} Θ^x_y ≡ 1 on D and Θ^x_y ≡ 1 on K^x_y for y ∈ S \ {x}. With the constructions above, we define F^x : H_N → R as F^x(η) = Σ_{y∈S\{x}} Θ^x_y(η/N) F^{xy}(η).
10.1. A super-harmonic function. We first establish, in Lemma 10.6 below, the estimate stated in Proposition 10.1 for the process which is reflected at the boundary of W^x_N. The proof of Lemma 10.6 is based on the construction, carried out in [41, Section 10], of a function G^x_N