
Tunneling and Metastability of Continuous Time Markov Chains II, the Nonreversible Case


Abstract

In Beltrán and Landim (J. Stat. Phys. 140:1065–1114, 2010) we proposed a new approach, based on potential theory and local ergodicity, to proving the metastable behavior of reversible dynamics. In this article we extend the theory to nonreversible dynamics, relying on the Dirichlet principle proved in Gaudillière and Landim (arXiv:1111.2445, 2011). We also include a proof of the metastability of a class of birth-and-death chains.


References

  1. Beltrán, J., Jara, M., Landim, C.: Tunneling of the condensate in asymmetric zero range processes. (2012, in preparation)

  2. Beltrán, J., Landim, C.: Tunneling and metastability of continuous time Markov chains. J. Stat. Phys. 140, 1065–1114 (2010)


  3. Beltrán, J., Landim, C.: Metastability of reversible condensed zero range processes on a finite set. Probab. Theory Relat. Fields 152, 781–807 (2012)


  4. Beltrán, J., Landim, C.: Metastability of reversible finite state Markov processes. Stoch. Process. Appl. 121, 1633–1677 (2011)


  5. Beltrán, J., Landim, C.: Tunneling of the Kawasaki dynamics at low temperatures in two dimensions. arXiv:1109.2776 (2011)

  6. Bovier, A., Eckhoff, M., Gayrard, V., Klein, M.: Metastability in stochastic dynamics of disordered mean field models. Probab. Theory Relat. Fields 119, 99–161 (2001)


  7. Bovier, A., Eckhoff, M., Gayrard, V., Klein, M.: Metastability and low lying spectra in reversible Markov chains. Commun. Math. Phys. 228, 219–255 (2002)


  8. Cassandro, M., Galves, A., Olivieri, E., Vares, M.E.: Metastable behavior of stochastic dynamics: a pathwise approach. J. Stat. Phys. 35, 603–634 (1984)


  9. Freedman, D.: Markov Chains. Holden-Day, San Francisco (1971)


  10. Gaudillière, A.: Condenser physics applied to Markov chains: a brief introduction to potential theory. arXiv:0901.3053

  11. Gaudillière, A., Landim, C.: A Dirichlet principle for non reversible Markov chains and some recurrence theorems. arXiv:1111.2445 (2011)

  12. Jara, M., Landim, C., Teixeira, A.: Quenched scaling limits of trap models. Ann. Probab. 39, 176–223 (2011)


  13. Jara, M., Landim, C., Teixeira, A.: Universality of trap models in the ergodic time scale. arXiv:1208.5675 (2012)

  14. Landim, C.: Metastability for a non-reversible dynamics: the evolution of the condensate in totally asymmetric zero range processes. arXiv:1204.5987

  15. Lebowitz, J.L., Penrose, O.: Rigorous treatment of the van der Waals–Maxwell theory of the liquid–vapor transition. J. Math. Phys. 7, 98–113 (1966)


  16. Norris, J.R.: Markov Chains. Cambridge University Press, Cambridge (1997)


  17. Olivieri, E., Vares, M.E.: Large Deviations and Metastability. Encyclopedia of Mathematics and Its Applications, vol. 100. Cambridge University Press, Cambridge (2005)



Acknowledgements

The authors would like to thank the anonymous referees for their careful reading which helped to improve the presentation of this article.

Author information


Correspondence to C. Landim.

Appendix: Potential Theory for Positive Recurrent Processes

We state in this section several properties of continuous time Markov chains used throughout the article. Consider a countable set E and a matrix R:E×E→ℝ such that R(η,ξ)≥0 for η≠ξ, −∞<R(η,η)<0, and ∑_ξ R(η,ξ)=0 for all η∈E. Let λ(η)=−R(η,η). Since λ(η) is finite and strictly positive, we may define the transition probabilities {p(η,ξ):η,ξ∈E} as

$$ p(\eta,\xi) = \frac{1}{\lambda(\eta)} R(\eta,\xi) \quad\mbox{for}\ \eta\neq \xi , $$
(A.1)

and p(η,η)=0 for η∈E. We assume throughout this section that {p(η,ξ):η,ξ∈E} are the transition probabilities of an irreducible and recurrent discrete time Markov chain, denoted by Y={Y_n : n≥0}.

Let {η(t):t≥0} be the unique strong Markov process associated to the rates R(η,ξ). We shall refer to R(⋅,⋅), λ(⋅) and p(⋅,⋅) as the transition rates, holding rates and jump probabilities of {η(t):t≥0}, respectively. Since the jump chain Y is irreducible and recurrent, so is the corresponding Markov process {η(t):t≥0}. We shall assume throughout this section that η(t) is positive recurrent. In consequence, η(t) has a unique invariant probability measure μ. Moreover,

$$ M(\eta) := \lambda(\eta)\mu(\eta) , \quad\eta\in E , $$
(A.2)

is an invariant measure for the jump chain Y, unique up to scalar multiples. The proofs of these assertions can be found in Sects. 3.4 and 3.5 of [16]. We assume furthermore that the holding rates are summable with respect to μ:

$$ \sum_{\eta\in E} \lambda(\eta) \mu(\eta) < \infty, $$
(A.3)

so that M is a finite measure. Assumption (A.3) reduces the potential theory of continuous time Markov chains to the potential theory of discrete time Markov chains.
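
For concreteness, the following minimal sketch illustrates (A.1)–(A.3) on a small finite state space, where (A.3) is automatic. The rate matrix and every numerical choice below are arbitrary illustrative assumptions, not taken from the article.

    import numpy as np

    # Toy rate matrix on E = {0, 1, 2, 3}: nonnegative off-diagonal entries, rows summing to zero.
    R = np.array([[-3.,  2.,  1.,  0.],
                  [ 1., -4.,  2.,  1.],
                  [ 0.,  3., -5.,  2.],
                  [ 1.,  0.,  1., -2.]])
    n = R.shape[0]

    lam = -np.diag(R)                    # holding rates lambda(eta) = -R(eta, eta)
    p = R / lam[:, None]                 # jump probabilities (A.1) ...
    np.fill_diagonal(p, 0.0)             # ... with p(eta, eta) = 0

    # Invariant probability measure mu of eta(t): mu^T R = 0, normalized to total mass one.
    lin = np.vstack([R.T[:-1], np.ones(n)])
    mu = np.linalg.solve(lin, np.append(np.zeros(n - 1), 1.0))

    M = lam * mu                         # M(eta) = lambda(eta) mu(eta), cf. (A.2)
    assert np.allclose(M @ p, M)         # M is an invariant measure for the jump chain Y
    # On a finite E the summability assumption (A.3) is automatic: M.sum() is finite.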

Let L²(μ), L²(M) be the spaces of square integrable functions f:E→ℝ endowed with the usual scalar product ⟨f,g⟩_m = ∑_{η∈E} f(η)g(η)m(η), with m=μ, M, respectively. Denote by P the bounded operator in L²(M) defined by

$$ (Pf) (\eta) = \sum_{\xi\in E} p(\eta,\xi) \bigl\{f(\xi) - f(\eta)\bigr\} $$
(A.4)

for f∈L²(M), and by L the generator of the Markov process {η(t):t≥0}. Thus, for every finitely supported function f:E→ℝ,

$$ (Lf) (\eta) = \sum_{\xi\in E} R(\eta,\xi) \bigl\{f(\xi) - f(\eta)\bigr\} . $$

Let L* be the adjoint of the operator L on L²(μ). L* is the generator of a Markov process {η*(t):t≥0} with holding rates λ* and jump rates R* given by λ*(η)=λ(η), R*(η,ξ)=μ(ξ)R(ξ,η)/μ(η), so that μ(η)R*(η,ξ)=μ(ξ)R(ξ,η). Let P* be the adjoint of P in L²(M). Clearly, the bounded operator P* is given by (A.4) with p* in place of p. Denote by \(\{Y^{*}_{n} : n\ge0\}\) the discrete time Markov chain associated to P*.

Similarly, let S be the symmetric part of the operator L on L²(μ). S is the generator of a Markov process {η^s(t):t≥0} with holding rates λ^s and jump rates R^s given by λ^s(η)=λ(η), R^s(η,ξ)=(1/2){R(η,ξ)+R*(η,ξ)}, so that μ(η)R^s(η,ξ)=μ(ξ)R^s(ξ,η).

Let P_η, \(\mathbf{P}^{*}_{\eta}\), \(\mathbf{P}^{s}_{\eta}\), η∈E, be the probability measures on the path space D(ℝ+,E) of right continuous trajectories with left limits induced by the Markov processes {η(t):t≥0}, {η*(t):t≥0}, {η^s(t):t≥0} starting from η, respectively. Expectations with respect to P_η, \(\mathbf{P}^{*}_{\eta}\), \(\mathbf{P}^{s}_{\eta}\) are denoted by E_η, \(\mathbf{E}^{*}_{\eta}\), \(\mathbf{E}^{s}_{\eta}\), respectively.
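
As an illustration of these definitions (continuing the toy chain of the sketch after (A.3); the matrix and the variable names R_adj, R_sym are illustrative assumptions), the adjoint and symmetrized rate matrices can be formed directly and the stated balance relations checked numerically.

    import numpy as np

    # Toy chain of the sketch after (A.3).
    R = np.array([[-3., 2., 1., 0.], [1., -4., 2., 1.], [0., 3., -5., 2.], [1., 0., 1., -2.]])
    n = R.shape[0]
    mu = np.linalg.solve(np.vstack([R.T[:-1], np.ones(n)]), np.append(np.zeros(n - 1), 1.0))

    R_adj = mu[None, :] * R.T / mu[:, None]   # R*(eta, xi) = mu(xi) R(xi, eta) / mu(eta)
    R_sym = 0.5 * (R + R_adj)                 # R^s(eta, xi) = (1/2){R(eta, xi) + R*(eta, xi)}

    assert np.allclose(np.diag(R_adj), np.diag(R))                    # lambda* = lambda
    assert np.allclose(mu[:, None] * R_adj, (mu[:, None] * R).T)      # mu(eta) R*(eta, xi) = mu(xi) R(xi, eta)
    assert np.allclose(mu[:, None] * R_sym, (mu[:, None] * R_sym).T)  # mu(eta) R^s(eta, xi) = mu(xi) R^s(xi, eta)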

Denote by H_A (resp. \(H^{+}_{A}\)), A⊆E, the hitting time of (resp. return time to) the set A:

$$ H_A := \inf\bigl\{ t>0 : \eta(t) \in A \bigr\} , \qquad H^+_A := \inf\bigl\{ t>\tau_1 : \eta(t) \in A \bigr\} , $$

where τ_1 stands for the time of the first jump of the process {η(t):t≥0}.

Let F be a proper subset of E. Denote by \(\{\mathcal{T}_{t} : t\ge 0\}\) the time spent on the set F by the process η(s) in the time interval [0,t]:

$$\mathcal{T}_{t} := \int_0^t \mathbf{1} \bigl\{\eta(s) \in F\bigr\} ds . $$

Notice that \(\mathcal{T}_{t}\in\mathbb{R}_{+}\), P_η-a.s. for every η∈E and t≥0. Denote by \(\{\mathcal{S}_{t} : t\ge0\}\) the generalized inverse of \(\mathcal{T}_{t}\):

$$\mathcal{S}_t := \sup\{s\ge0 : \mathcal{T}_s \le t \} . $$

Since {η(t):t≥0} is irreducible and recurrent, \(\lim_{t\to \infty}\mathcal{T}_{t} = \infty\), P_η-a.s. for every η∈E. Therefore, the random path {η^F(t):t≥0}, given by \(\eta^{F}(t) = \eta(\mathcal{S}_{t})\), is P_η-a.s. well defined for all η∈E and takes values in the set F. We call the process {η^F(t):t≥0} the trace of {η(t):t≥0} on the set F.
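
The time-change construction above translates directly into a simulation. The following sketch, again on the illustrative toy chain of the sketch after (A.3), runs η(t), discards the time spent outside F and records the jumps of the resulting path; the helper name trace_path, the set F = {0, 1} and the random seed are arbitrary choices made for the example.

    import numpy as np

    # Toy chain of the sketch after (A.3).
    R = np.array([[-3., 2., 1., 0.], [1., -4., 2., 1.], [0., 3., -5., 2.], [1., 0., 1., -2.]])
    lam = -np.diag(R)
    p = R / lam[:, None]
    np.fill_diagonal(p, 0.0)
    rng = np.random.default_rng(0)

    def trace_path(F, t_max=100.0, eta0=0):
        """Jump times and states of the trace on F, run until the trace clock exceeds t_max.

        The initial state eta0 is assumed to belong to F; the trace clock only
        advances while eta(t) sits in F, mirroring the time change eta^F(t) = eta(S_t).
        """
        eta, clock = eta0, 0.0
        times, states = [0.0], [eta0]
        while True:
            hold = rng.exponential(1.0 / lam[eta])     # holding time of eta(t) at the current state
            if eta in F:
                clock += hold                          # only time spent in F advances the trace clock
                if clock > t_max:
                    return times, states
            eta = int(rng.choice(len(lam), p=p[eta]))  # step of the embedded jump chain Y
            if eta in F and eta != states[-1]:
                times.append(clock)                    # the trace jumps at the accumulated F-time
                states.append(eta)

    times, states = trace_path(F={0, 1})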

Denote by R^F(η,ξ) the jump rates of the trace process {η^F(t):t≥0}. By Propositions 6.1 and 6.3 in [2], {η^F(t):t≥0} is an irreducible, recurrent strong Markov process whose invariant measure μ^F is given by

$$ \mu^F(\xi) = \frac{1}{\mu(F)} \mu(\xi) ,\quad\xi\in F . $$
(A.5)

For each pair A, B of disjoint subsets of F, denote by r_F(A,B) the average rate at which the trace process jumps from A to B:

$$ r_F(A,B) := \frac{1}{\mu(A)} \sum_{\eta\in A} \mu(\eta) \sum_{\xi\in B} R^F(\eta,\xi) . $$

By [2, Proposition 6.1],

$$ r_F(A,B) = \frac{1}{\mu(A)} \sum _{\eta\in A} M(\eta) \mathbf{P}_{\eta} \bigl[ H^+_F = H^+_B \bigr] . $$
(A.6)

We shall refer to r_F(⋅,⋅) as the mean set rates associated to the trace process.

Recall from [11] that the capacity between two disjoint subsets A, B of E, denoted by \(\operatorname{cap}(A,B)\), is defined as

$$ \operatorname{cap}(A,B) := \sum_{\eta\in A} M( \eta) \mathbf{P}_{\eta} \bigl[ H^+_B <H^+_{A} \bigr] . $$
(A.7)

Hence, by (A.6) for any two disjoint subsets A, B of E,

$$ \operatorname{cap}(A,B) = \mu(A) r_{A\cup B} (A,B) . $$
(A.8)
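
On a finite chain, definition (A.7) can be evaluated exactly by solving a small linear system for the hitting probabilities of the embedded chain. The sketch below does so on the toy chain of the sketch after (A.3), with the arbitrary choice A = {0}, B = {3} and an assumed helper named capacity, and checks the identity cap*(A,B) = cap(A,B) of (A.9); the same measure M may be used for the adjoint process, since λ* = λ and μ is also invariant for it.

    import numpy as np

    # Toy chain of the sketch after (A.3), its invariant measure mu and adjoint rates R*.
    R = np.array([[-3., 2., 1., 0.], [1., -4., 2., 1.], [0., 3., -5., 2.], [1., 0., 1., -2.]])
    n = R.shape[0]
    lam = -np.diag(R)
    mu = np.linalg.solve(np.vstack([R.T[:-1], np.ones(n)]), np.append(np.zeros(n - 1), 1.0))
    R_adj = mu[None, :] * R.T / mu[:, None]

    def capacity(rates, A, B):
        """cap(A, B) of (A.7) for the chain with rate matrix `rates` and invariant measure mu."""
        lam_ = -np.diag(rates)
        p_ = rates / lam_[:, None]
        np.fill_diagonal(p_, 0.0)
        # h(xi) = P_xi[H_B < H_A]: equal to 1 on B, 0 on A, harmonic elsewhere.
        h = np.zeros(n)
        h[list(B)] = 1.0
        rest = [x for x in range(n) if x not in A | B]
        if rest:
            h[rest] = np.linalg.solve(np.eye(len(rest)) - p_[np.ix_(rest, rest)],
                                      p_[np.ix_(rest, list(B))].sum(axis=1))
        escape = p_ @ h                                  # escape(eta) = P_eta[H_B^+ < H_A^+], eta in A
        return sum(lam_[x] * mu[x] * escape[x] for x in A)

    A, B = {0}, {3}
    assert np.isclose(capacity(R, A, B), capacity(R_adj, A, B))   # cap*(A, B) = cap(A, B), cf. (A.9)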

Let \(\operatorname{cap}^{*} (A,B)\), \(\operatorname{cap}^{s}(A,B)\) be the capacity between two disjoint subsets A, B of E for the adjoint, symmetric process, respectively. By Eq. (2.4) and Lemmata 2.3, 2.5 in [11],

$$ \operatorname{cap}^* (A,B) = \operatorname{cap}(B,A) = \operatorname {cap}(A,B) , \qquad \operatorname{cap}^s(A,B) \le \operatorname{cap}(A,B) . $$
(A.9)

Denote by \(\operatorname{cap}_{F}\) the capacity with respect to the trace process η^F(t). By the proof of [2, Lemma 6.9], for all subsets A, B of F with A∩B=∅,

$$ \mu(F) \operatorname{cap}_F(A,B) = \operatorname{cap}(A,B) . $$
(A.10)

The next result presents an identity between the capacities \(\operatorname{cap}^{s}(A,B)\) and \(\operatorname{cap}(A,B)\), which is somewhat surprising in view of inequality (A.9).

Lemma A.1

Let A, B, C be three disjoint subsets of E. Then,

Proof

Taking F=A∪B∪C in (A.10), we may assume that A, B, C form a partition of E. In this case, since \(\operatorname{cap}(C, A\cup B) = \operatorname{cap}(A\cup B, C)\) and since E=A∪B∪C, by (A.8) the left-hand side of the previous equation is equal to

where R(ξ,D)=∑_{ζ∈D} R(ξ,ζ). Performing the computations backwards with R^s in place of R, we conclude the proof. □

We conclude this section by proving a relation between expectations of time integrals of functions and capacities. Fix two disjoint subsets A, B of E. Denote by f_{AB}, \(f^{*}_{AB}:E \to\mathbb{R}\) the harmonic functions defined as

$$ f_{AB}(\eta) := \mathbf{P}_{\eta} [ H_{A} < H_B ] , \qquad f^*_{AB}(\eta) := \mathbf{P}^*_{\eta} [ H_{A} < H_B ] . $$

An elementary computation shows that f_{AB} solves the equation

$$ \begin{cases} (L f)(\eta) =0 & \eta\in E\setminus(A\cup B) , \\ f(\eta) = 1 & \eta\in A , \\ f(\eta) = 0 & \eta\in B \end{cases} $$
(A.11)

and that \(f^{*}_{AB}\) solves the same equation with the adjoint L* replacing L. Clearly, we may replace the generator L by the operator I−P in the above equation, and (A.11) has a unique solution in L²(M), given by f_{AB}.
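
On a finite state space, (A.11) is a small boundary value problem that can be solved directly, either with the generator L or with I−P, understanding P as acting through the transition matrix of the jump chain. The sketch below, on the toy chain of the sketch after (A.3) with the arbitrary choice A = {0}, B = {3}, performs both computations and checks that they yield the same harmonic function f_{AB}.

    import numpy as np

    # Toy chain of the sketch after (A.3); P is realized as the transition matrix of the jump chain Y.
    R = np.array([[-3., 2., 1., 0.], [1., -4., 2., 1.], [0., 3., -5., 2.], [1., 0., 1., -2.]])
    n = R.shape[0]
    lam = -np.diag(R)
    p = R / lam[:, None]
    np.fill_diagonal(p, 0.0)

    A, B = [0], [3]
    inner = [x for x in range(n) if x not in A and x not in B]

    # Solve (A.11) with the generator: (Lf)(eta) = 0 off A u B, f = 1 on A, f = 0 on B.
    f = np.zeros(n)
    f[A] = 1.0
    f[inner] = np.linalg.solve(R[np.ix_(inner, inner)], -R[np.ix_(inner, A)].sum(axis=1))

    # Solve the same boundary value problem with I - P in place of L.
    f_alt = np.zeros(n)
    f_alt[A] = 1.0
    f_alt[inner] = np.linalg.solve(np.eye(len(inner)) - p[np.ix_(inner, inner)],
                                   p[np.ix_(inner, A)].sum(axis=1))

    assert np.allclose(f, f_alt)         # both recover f_AB(eta) = P_eta[H_A < H_B]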

Define the harmonic measures ν_{AB}, \(\nu^{*}_{AB}\) on A as

$$\nu_{AB}(\eta) = \frac{M(\eta) \mathbf{P}_{\eta} [ H^+_B <H^+_{A} ] }{\operatorname {cap}(A,B)} , \qquad\nu^*_{AB}(\eta) = \frac{M(\eta) \mathbf{P}^*_{\eta} [ H^+_B <H^+_{A} ] }{\operatorname {cap}^*(A,B)} \quad\eta\in A . $$

Denote by \(\mathbf{E}_{\nu_{AB}}\) the expectation associated to the Markov process {η(t):t≥0} with initial distribution ν_{AB}. The next result is the generalization to non-reversible dynamics of [2, Proposition 6.10].

Proposition A.2

Fix two disjoint subsets A, B of E. Let g:E→ℝ be a μ-integrable function. Then,

$$ \mathbf{E}_{\nu^*_{AB}} \biggl[ \int_0^{H_B} g\bigl(\eta(t)\bigr) dt \biggr] = \frac{\langle g , f^*_{AB}\rangle_{\mu} }{ \operatorname{cap}(A,B)} , $$
(A.12)

where ⟨⋅,⋅⟩_μ represents the scalar product in L²(μ).

Proof

We first claim that the proposition holds for indicator functions of states. Fix an arbitrary state ξ∈E and let g be the indicator of ξ. If ξ belongs to B, both the left-hand and the right-hand sides of (A.12) vanish. We may therefore assume that ξ does not belong to B. In this case we may write the expectation appearing in the statement of the proposition as

$$ \mathbf{E}_{\nu^*_{AB}} \Biggl[ \sum_{n=0}^{\mathbb{H}_B -1} \frac{e_n}{\lambda(\xi)} \mathbf{1}\{ Y_n=\xi\} \Biggr] , $$

where {Y_n : n≥0} is the embedded discrete time Markov chain, {e_n : n≥0} is a sequence of i.i.d. mean one exponential random variables independent of the jump chain {Y_n : n≥0}, and ℍ_B is the hitting time of the set B for the discrete time Markov chain Y_n. By the Markov property and by the definition of the harmonic measure \(\nu^{*}_{AB}\), this expression is equal to

We may replace the hitting time H_B and the return time \(H^{+}_{A}\) by the respective times ℍ_B, \(\mathbb{H}^{+}_{A}\) for the discrete chain. On the other hand, since η and ξ do not belong to B, the event {Y_0=η, Y_n=ξ, n<ℍ_B} represents all paths that start from η and reach ξ at time n without passing through B. In particular, by the detailed balance relation between the process and its adjoint, \(M(\eta) \mathbf{P}_{\eta} [ Y_{n}=\xi , n< \mathbb{H}_{B}] = M(\xi) \mathbf{P}^{*}_{\xi} [ Y_{n}=\eta , n< \mathbb{H}_{B}]\), and the last sum becomes

where we used the Markov property in the last step. In this formula, {θ_k : k≥1} stands for the discrete time shift operators. Summing over η, the sum can be written as

$$ \frac{M(\xi)}{\lambda(\xi) \operatorname{cap}^*(A,B)} \sum_{n\ge0} \mathbf{P}^*_{\xi} \bigl[ Y_n \in A , n< \mathbb{H}_B , \mathbb{H}_B \circ\theta_n < \mathbb{H}^+_A \circ\theta_n \bigr]. $$

The event inside the probability is that the process Y_k visits A before visiting B and that its last visit to A before reaching B occurs at time n. Hence, since M(ξ)=λ(ξ)μ(ξ), since by (A.9) \(\operatorname{cap}^{*}(A,B) = \operatorname{cap}(A,B)\), and since g is the indicator of the state ξ, summing over n we conclude that the previous expression is equal to

$$ \frac{1}{\operatorname{cap}^*(A,B)} \mu(\xi) \mathbf{P}^*_{\xi} [ \mathbb{H}_A < \mathbb{H}_B ] = \frac{\langle g , f^*_{AB}\rangle_{\mu}}{ \operatorname{cap}(A,B)} \cdot $$

By linearity and the monotone convergence theorem, we obtain the desired result first for positive functions and then for general μ-integrable functions. □

In the particular case where A={η} for some η∉B, we have

$$ \mathbf{E}_{\eta} \biggl[\, \int_0^{H_B} g\bigl(\eta(s)\bigr) ds \biggr] = \frac{ \langle g , f^*_{\{\eta\} B} \rangle_{\mu}}{ \operatorname{cap}(\{\eta\},B)} $$
(A.13)

for any μ-integrable function g.
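
Identity (A.13) can be verified numerically on a finite chain: the left-hand side solves the Poisson equation Lu = −g off B with u = 0 on B, while the right-hand side involves only the adjoint harmonic function and the capacity. The sketch below carries this out on the toy chain of the sketch after (A.3); the state η = 0, the set B = {3} and the test function g are arbitrary choices.

    import numpy as np

    # Toy chain of the sketch after (A.3), its invariant measure mu and adjoint jump matrix p*.
    R = np.array([[-3., 2., 1., 0.], [1., -4., 2., 1.], [0., 3., -5., 2.], [1., 0., 1., -2.]])
    n = R.shape[0]
    lam = -np.diag(R)
    p = R / lam[:, None]
    np.fill_diagonal(p, 0.0)
    mu = np.linalg.solve(np.vstack([R.T[:-1], np.ones(n)]), np.append(np.zeros(n - 1), 1.0))
    M = lam * mu
    p_adj = (mu[None, :] * R.T / mu[:, None]) / lam[:, None]
    np.fill_diagonal(p_adj, 0.0)

    eta0, B = 0, [3]
    g = np.array([1.0, -2.0, 0.5, 3.0])              # an arbitrary test function g : E -> R

    # Left-hand side of (A.13): u(xi) = E_xi[int_0^{H_B} g(eta(t)) dt] solves Lu = -g off B, u = 0 on B.
    free = [x for x in range(n) if x not in B]
    u = np.zeros(n)
    u[free] = np.linalg.solve(-R[np.ix_(free, free)], g[free])

    # Right-hand side: f*_{AB}(xi) = P*_xi[H_A < H_B] with A = {eta0}, and cap(A, B) from (A.7).
    rest = [x for x in free if x != eta0]
    f_star = np.zeros(n)
    f_star[eta0] = 1.0
    f_star[rest] = np.linalg.solve(np.eye(len(rest)) - p_adj[np.ix_(rest, rest)],
                                   p_adj[rest, eta0])
    h = np.zeros(n)                                  # h(xi) = P_xi[H_B < H_{eta0}] for the original chain
    h[B] = 1.0
    h[rest] = np.linalg.solve(np.eye(len(rest)) - p[np.ix_(rest, rest)],
                              p[np.ix_(rest, B)].sum(axis=1))
    cap = M[eta0] * (p[eta0] @ h)                    # cap({eta0}, B), by (A.7)

    assert np.isclose(u[eta0], np.dot(g * mu, f_star) / cap)     # identity (A.13)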

Let S be a finite set, let π={A^x : x∈S} be a partition of E, and let ξ_x be a state in A^x for each x∈S. For each μ-integrable function g, denote by ⟨g|π⟩_μ : E→ℝ the conditional expectation of g, under μ, given the σ-algebra generated by π:

$$\langle g|\pi\rangle_{\mu} = \sum_{x\in S} \frac{\langle g \mathbf {1} \{A^x\} \rangle_{\mu}}{\mu(A^x)} \mathbf{1} \bigl\{A^x\bigr\} . $$

For each x∈S, let

$$\operatorname{cap}(\xi_x) := \inf_{\eta\in A^x\setminus\{\xi_x\}} \operatorname{cap} \bigl(\{\eta\}, \{\xi_x\}\bigr) . $$

The next result shows that if the process thermalizes quickly in each set of the partition, we may replace time averages of a bounded function by time averages of the conditional expectation. This statement plays a key role in the investigation of metastability. It assumes, however, the existence of an attractor. Its reversible version is a combination of [2, Corollary 6.5 and Lemma 6.11].

Corollary A.3

Let g:E→ℝ be a μ-integrable function. Then, for every t>0,

$$ \sup_{\eta\in E} \bigg\vert\mathbf{E}_{\eta} \biggl[ \int _0^t \bigl\{ g-\langle g | \pi \rangle_{\mu} \bigr\} \bigl(\eta(s)\bigr) ds \biggr] \bigg\vert \le 4 \sum _{x\in S} \frac{ \langle |g| \mathbf{1} \{A^x\} \rangle_{\mu}}{\operatorname {cap}(\xi_x)} , $$

where |g|(η)=|g(η)| for all η in E.

The proof of this result follows from [2, Corollary 6.5], formula (A.13) and the fact that \(f^{*}_{AB}\) is bounded by one.


About this article

Cite this article

Beltrán, J., Landim, C. Tunneling and Metastability of Continuous Time Markov Chains II, the Nonreversible Case. J Stat Phys 149, 598–618 (2012). https://doi.org/10.1007/s10955-012-0617-4
