Abstract
We proposed in Beltrán and Landim (J. Stat. Phys. 140:1065–1114, 2010) a new approach to prove the metastable behavior of reversible dynamics based on potential theory and local ergodicity. In this article we extend this theory to nonreversible dynamics based on the Dirichlet principle proved in Gaudillière and Landim (arXiv:1111.2445, 2011). We also include in this article the proof of the metastability of a class of birth and death chains.
References
Beltrán, J., Jara, M., Landim, C.: Tunneling of the condensate in asymmetric zero range processes. (2012, in preparation)
Beltrán, J., Landim, C.: Tunneling and metastability of continuous time Markov chains. J. Stat. Phys. 140, 1065–1114 (2010)
Beltrán, J., Landim, C.: Metastability of reversible condensed zero range processes on a finite set. Probab. Theory Relat. Fields 152, 781–807 (2012)
Beltrán, J., Landim, C.: Metastability of reversible finite state Markov processes. Stoch. Process. Appl. 121, 1633–1677 (2011)
Beltrán, J., Landim, C.: Tunneling of the Kawasaki dynamics at low temperatures in two dimensions. arXiv:1109.2776 (2011)
Bovier, A., Eckhoff, M., Gayrard, V., Klein, M.: Metastability in stochastic dynamics of disordered mean field models. Probab. Theory Relat. Fields 119, 99–161 (2001)
Bovier, A., Eckhoff, M., Gayrard, V., Klein, M.: Metastability and low lying spectra in reversible Markov chains. Commun. Math. Phys. 228, 219–255 (2002)
Cassandro, M., Galves, A., Olivieri, E., Vares, M.E.: Metastable behavior of stochastic dynamics: a pathwise approach. J. Stat. Phys. 35, 603–634 (1984)
Freedman, D.: Markov Chains. Holden-Day, San Francisco (1971)
Gaudillière, A.: Condenser physics applied to Markov chains: a brief introduction to potential theory. arXiv:0901.3053 (2009)
Gaudillière, A., Landim, C.: A Dirichlet principle for non reversible Markov chains and some recurrence theorems. arXiv:1111.2445 (2011)
Jara, M., Landim, C., Teixeira, A.: Quenched scaling limits of trap models. Ann. Probab. 39, 176–223 (2011)
Jara, M., Landim, C., Teixeira, A.: Universality of trap models in the ergodic time scale. arXiv:1208.5675 (2012)
Landim, C.: Metastability for a non-reversible dynamics: the evolution of the condensate in totally asymmetric zero range processes. arXiv:1204.5987 (2012)
Lebowitz, J.L., Penrose, O.: Rigorous treatment of the van der Waals–Maxwell theory of the liquid–vapor transition. J. Math. Phys. 7, 98–113 (1966)
Norris, J.R.: Markov Chains. Cambridge University Press, Cambridge (1997)
Olivieri, E., Vares, M.E.: Large Deviations and Metastability. Encyclopedia of Mathematics and Its Applications, vol. 100. Cambridge University Press, Cambridge (2005)
Acknowledgements
The authors would like to thank the anonymous referees for their careful reading which helped to improve the presentation of this article.
Appendix: Potential Theory for Positive Recurrent Processes
We state in this section several properties of continuous time Markov chains used throughout the article. Consider a countable set E and a matrix R:E×E→ℝ such that R(η,ξ)≥0, η≠ξ, −∞<R(η,η)<0, ∑ ξ R(η,ξ)=0, η∈E. Let λ(η)=−R(η,η). Since λ(η) is finite and strictly positive, we may define the transition probabilities {p(η,ξ):η,ξ∈E} as
\[ p(\eta,\xi) = \frac{R(\eta,\xi)}{\lambda(\eta)}, \quad \eta\ne\xi, \]
and p(η,η)=0 for η∈E. We assume throughout this section that {p(η,ξ):η,ξ∈E} are the transition probabilities of an irreducible and recurrent discrete time Markov chain, denoted by Y={Y n :n≥0}.
Let {η(t):t≥0} be the unique strong Markov process associated to the rates R(η,ξ). We shall refer to R(⋅,⋅), λ(⋅) and p(⋅,⋅) as the transition rates, holding rates and jump probabilities of {η(t):t≥0}, respectively. Since the jump chain Y is irreducible and recurrent, so is the corresponding Markov process {η(t):t≥0}. We shall assume throughout this section that η(t) is positive recurrent. In consequence, η(t) has a unique invariant probability measure μ. Moreover,
\[ M(\eta) = \mu(\eta)\,\lambda(\eta), \quad \eta\in E, \]
is an invariant measure for the jump chain Y, unique up to scalar multiples. The proofs of these assertions can be found in Sects. 3.4 and 3.5 of [16]. We assume furthermore that the holding rates are summable with respect to μ:
\[ \sum_{\eta\in E} \mu(\eta)\,\lambda(\eta) < \infty, \tag{A.3} \]
so that M is a finite measure. Assumption (A.3) reduces the potential theory of continuous time Markov chains to the potential theory of discrete time Markov chains.
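As a numerical illustration of these definitions (the four-state generator below is an arbitrary choice, not taken from the article), one can build the holding rates, the jump probabilities and the invariant measure, and check that M(η)=μ(η)λ(η) is indeed invariant for the jump chain:

```python
import numpy as np

# An arbitrary 4-state irreducible, nonreversible generator R:
# nonnegative off-diagonal entries, each row summing to zero.
R = np.array([[-3.0,  2.0,  1.0,  0.0],
              [ 0.0, -2.0,  1.5,  0.5],
              [ 1.0,  0.0, -1.5,  0.5],
              [ 0.5,  0.5,  1.0, -2.0]])

lam = -np.diag(R)              # holding rates lambda(eta)
p = R / lam[:, None]           # jump probabilities p(eta, xi)
np.fill_diagonal(p, 0.0)

# Invariant probability mu: solve mu R = 0 together with sum(mu) = 1.
A = np.vstack([R.T, np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
mu, *_ = np.linalg.lstsq(A, b, rcond=None)

# M(eta) = mu(eta) lam(eta) is an invariant measure for the jump chain Y.
M = mu * lam
print(np.allclose(M @ p, M))   # True
```

Since the state space is finite, summability of the holding rates is automatic here; the check M p = M is the discrete-time counterpart of the invariance of μ.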
Let L 2(μ), L 2(M) be the space of square integrable functions f:E→ℝ endowed with the usual scalar product 〈f,g〉 m =∑ η∈E f(η)g(η)m(η), with m=μ, M, respectively. Denote by P the bounded operator in L 2(M) defined by
\[ (Pf)(\eta) = \sum_{\xi\in E} p(\eta,\xi)\, f(\xi), \tag{A.4} \]
for f∈L 2(M), and by L the generator of the Markov process {η(t):t≥0}. Thus, for every finitely supported function f:E→ℝ,
\[ (Lf)(\eta) = \sum_{\xi\in E} R(\eta,\xi)\,\{ f(\xi) - f(\eta) \} = \lambda(\eta)\,\{ (Pf)(\eta) - f(\eta) \}. \]
Let L ∗ be the adjoint of the operator L on L 2(μ). L ∗ is the generator of a Markov process {η ∗(t):t≥0} with holding rates λ ∗ and transition rates R ∗ given by λ ∗(η)=λ(η), R ∗(η,ξ)=μ(ξ)R(ξ,η)/μ(η), so that μ(η)R ∗(η,ξ)=μ(ξ)R(ξ,η). Let P ∗ be the adjoint of P in L 2(M). Clearly, the bounded operator P ∗ is given by (A.4) with p ∗ in place of p. Denote by \(\{Y^{*}_{n} : n\ge0\}\) the discrete time Markov chain associated to P ∗.
Similarly, let S be the symmetric part of the operator L on L 2(μ). S is the generator of a Markov process {η s(t):t≥0} with holding rates λ s and transition rates R s given by λ s(η)=λ(η), R s(η,ξ)=(1/2){R(η,ξ)+R ∗(η,ξ)}, so that μ(η)R s(η,ξ)=μ(ξ)R s(ξ,η).
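The adjoint and symmetrized rates can be built explicitly from μ and R. The sketch below continues the arbitrary four-state example and verifies that R ∗ is again a generator and that R s satisfies detailed balance with respect to μ:

```python
import numpy as np

# Same arbitrary 4-state nonreversible generator as in the earlier sketch.
R = np.array([[-3.0,  2.0,  1.0,  0.0],
              [ 0.0, -2.0,  1.5,  0.5],
              [ 1.0,  0.0, -1.5,  0.5],
              [ 0.5,  0.5,  1.0, -2.0]])
mu, *_ = np.linalg.lstsq(np.vstack([R.T, np.ones(4)]),
                         np.array([0.0, 0.0, 0.0, 0.0, 1.0]), rcond=None)

# Adjoint rates: R*(eta, xi) = mu(xi) R(xi, eta) / mu(eta)
Rstar = (mu[None, :] * R.T) / mu[:, None]

# Symmetrized rates: R^s = (R + R*) / 2
Rs = 0.5 * (R + Rstar)

# R* is a generator with the same holding rates as R,
# and R^s is reversible with respect to mu.
print(np.allclose(Rstar.sum(axis=1), 0.0))                  # True
print(np.allclose(mu[:, None] * Rs, (mu[:, None] * Rs).T))  # True
```

The row sums of R ∗ vanish precisely because μ is invariant for R, and the symmetry of the matrix μ(η)R s(η,ξ) is the detailed balance relation stated above.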
Let P η , \(\mathbf{P}^{*}_{\eta}\), \(\mathbf{P}^{s}_{\eta}\), η∈E, be the probability measure on the path space D(ℝ+,E) of right continuous trajectories with left limits induced by the Markov process {η(t):t≥0}, {η ∗(t):t≥0}, {η s(t):t≥0} starting from η, respectively. Expectation with respect to P η , \(\mathbf{P}^{*}_{\eta}\), \(\mathbf{P}^{s}_{\eta }\) are denoted by E η , \(\mathbf{E}^{*}_{\eta}\), \(\mathbf{E}^{s}_{\eta}\), respectively.
Denote by H A (resp. \(H^{+}_{A}\)), A⊆E, the hitting time of (resp. return time to) the set A:
\[ H_{A} = \inf\{ t > 0 : \eta(t)\in A \}, \qquad H^{+}_{A} = \inf\{ t > \sigma_{1} : \eta(t)\in A \}, \]
where σ 1 stands for the time of the first jump of the process η(t).
Let F be a proper subset of E. Denote by \(\{\mathcal{T}_{t} : t\ge 0\}\) the time spent on the set F by the process η(s) in the time interval [0,t]:
\[ \mathcal{T}_{t} = \int_{0}^{t} \mathbf{1}\{ \eta(s)\in F \}\, ds. \]
Notice that \(\mathcal{T}_{t}\in\mathbb{R}_{+}\), P η -a.s. for every η∈E and t≥0. Denote by \(\{\mathcal{S}_{t} : t\ge0\}\) the generalized inverse of \(\mathcal{T}_{t}\):
\[ \mathcal{S}_{t} = \sup\{ s\ge 0 : \mathcal{T}_{s} \le t \}. \]
Since {η(t):t≥0} is irreducible and recurrent, \(\lim_{t\to \infty}\mathcal{T}_{t} = \infty\), P η -a.s. for every η∈E. Therefore, the random path {η F(t):t≥0}, given by \(\eta^{F}(t) = \eta(\mathcal{S}_{t})\), is P η -a.s. well defined for all η∈E and takes values in the set F. We call the process {η F(t):t≥0} the trace of {η(t):t≥0} on the set F.
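The time-change construction can be mimicked in a short simulation: run the jump chain with its exponential holding times, discard the time spent outside F, and glue the sojourns in F together in order. The chain and the set F below are arbitrary choices for illustration:

```python
import numpy as np

# Same arbitrary 4-state nonreversible generator as in the other sketches.
R = np.array([[-3.0,  2.0,  1.0,  0.0],
              [ 0.0, -2.0,  1.5,  0.5],
              [ 1.0,  0.0, -1.5,  0.5],
              [ 0.5,  0.5,  1.0, -2.0]])
lam = -np.diag(R)
p = R / lam[:, None]
np.fill_diagonal(p, 0.0)

rng = np.random.default_rng(1)
F = {0, 1}                      # trace the dynamics on F = {0, 1}

# Simulate the jump chain together with its exponential holding times.
eta, states, holds = 0, [], []
for _ in range(10000):
    states.append(eta)
    holds.append(rng.exponential(1.0 / lam[eta]))
    eta = rng.choice(4, p=p[eta])

# The trace eta^F keeps only the sojourns in F, glued together in order;
# the time spent outside F is cut out by the time change S_t.
trace = [(s, t) for s, t in zip(states, holds) if s in F]
time_in_F = sum(t for _, t in trace)

print(all(s in F for s, _ in trace))   # True
```

Over a long run, the fraction of trace time spent at each state of F approximates μ F, the invariant measure of the trace process discussed next.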
Denote by R F(η,ξ) the transition rates of the trace process {η F(t):t≥0}. By Propositions 6.1 and 6.3 in [2], {η F(t):t≥0} is an irreducible, recurrent strong Markov process whose invariant measure μ F is given by
\[ \mu_{F}(\eta) = \frac{\mu(\eta)}{\mu(F)}, \quad \eta\in F. \]
For each pair A, B of disjoint subsets of F, denote by r F (A,B) the average rate at which the trace process jumps from A to B:
\[ r_{F}(A,B) = \frac{1}{\mu_{F}(A)} \sum_{\eta\in A} \mu_{F}(\eta) \sum_{\xi\in B} R^{F}(\eta,\xi). \]
By [2, Proposition 6.1],
We shall refer to r F (⋅,⋅) as the mean set rates associated to the trace process.
Recall from [11] that the capacity between two disjoint subsets A, B of E, denoted by \(\operatorname{cap}(A,B)\), is defined as
Hence, by (A.6) for any two disjoint subsets A, B of E,
Let \(\operatorname{cap}^{*} (A,B)\), \(\operatorname{cap}^{s}(A,B)\) be the capacity between two disjoint subsets A, B of E for the adjoint, symmetric process, respectively. By Eq. (2.4) and Lemmata 2.3, 2.5 in [11],
Denote by \(\operatorname{cap}_{F}\) the capacity with respect to the trace process η F(t). By the proof of [2, Lemma 6.9], for every pair of disjoint subsets A, B of F,
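These capacity identities can be probed numerically. The sketch below assumes the standard escape-probability representation cap(A,B) = ∑_{η∈A} μ(η)λ(η)P η[H B < H + A] for recurrent chains (an assumption on our part, stated here in place of the displayed formulas), and checks on an arbitrary four-state chain that the capacities of the process and of its adjoint coincide, while the symmetrized capacity is a lower bound:

```python
import numpy as np

# Same arbitrary 4-state nonreversible generator as in the other sketches.
R = np.array([[-3.0,  2.0,  1.0,  0.0],
              [ 0.0, -2.0,  1.5,  0.5],
              [ 1.0,  0.0, -1.5,  0.5],
              [ 0.5,  0.5,  1.0, -2.0]])
mu, *_ = np.linalg.lstsq(np.vstack([R.T, np.ones(4)]),
                         np.array([0.0, 0.0, 0.0, 0.0, 1.0]), rcond=None)

def capacity(R, mu, A, B):
    # cap(A,B) = sum_{eta in A} mu(eta) lam(eta) P_eta[H_B < H_A^+],
    # computed through the embedded discrete time chain.
    n = len(mu)
    lam = -np.diag(R)
    p = R / lam[:, None]
    np.fill_diagonal(p, 0.0)
    h = np.zeros(n)             # h(xi) = P_xi[H_B < H_A]
    h[list(B)] = 1.0
    inner = [i for i in range(n) if i not in A | B]
    if inner:
        h[inner] = np.linalg.solve(np.eye(len(inner)) - p[np.ix_(inner, inner)],
                                   p[np.ix_(inner, list(B))].sum(axis=1))
    return sum(mu[a] * lam[a] * (p[a] @ h) for a in A)

Rstar = (mu[None, :] * R.T) / mu[:, None]   # adjoint rates
Rs = 0.5 * (R + Rstar)                      # symmetrized rates

cap = capacity(R, mu, {0}, {2})
print(np.isclose(cap, capacity(Rstar, mu, {0}, {2})))   # True: cap = cap*
print(capacity(Rs, mu, {0}, {2}) <= cap + 1e-12)        # True: cap^s <= cap
```

The equality cap = cap ∗ is the identity invoked later in the proof of Proposition A.2, and cap s ≤ cap is the inequality (A.9) referred to below.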
The next result presents an identity between the capacities \(\operatorname {cap}^{s}(A,B)\) and \(\operatorname{cap}(A,B)\), somewhat surprising in view of inequality (A.9).
Lemma A.1
Let A, B, C be three disjoint subsets of E. Then,
Proof
Taking F=A∪B∪C in (A.10), we may assume that A, B, C form a partition of E. In this case, since \(\operatorname {cap}(C, A\cup B) = \operatorname{cap}(A\cup B, C)\) and since E=A∪B∪C, by (A.8) the left hand side of the previous equation is equal to
where R(ξ,D)=∑ ζ∈D R(ξ,ζ). Performing the computations backward with R s in place of R we conclude the proof. □
We conclude this section proving a relation between expectations of time integrals of functions and capacities. Fix two disjoint subsets A, B of E. Denote by f AB , \(f^{*}_{AB}:E \to\mathbb{R}\) the harmonic functions defined as
An elementary computation shows that f AB solves the equation
and that \(f^{*}_{AB}\) solves the same equation with the adjoint L ∗ replacing L. Clearly, we may replace the generator L by the operator I−P in the above equation, and (A.11) has a unique solution in L 2(M) given by f AB .
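Assuming the usual identification f AB(η) = P η[H A < H B] (the displayed definition is not reproduced here, so this is our reading), the harmonic function can be computed by solving the linear system for the embedded chain, as the remark about I−P suggests:

```python
import numpy as np

# Same arbitrary 4-state nonreversible generator as in the earlier sketches.
R = np.array([[-3.0,  2.0,  1.0,  0.0],
              [ 0.0, -2.0,  1.5,  0.5],
              [ 1.0,  0.0, -1.5,  0.5],
              [ 0.5,  0.5,  1.0, -2.0]])
lam = -np.diag(R)
p = R / lam[:, None]
np.fill_diagonal(p, 0.0)

A, B = {0}, {2}
inner = [i for i in range(4) if i not in A | B]

# Solve (I - P) f = 0 off A u B with boundary values f = 1 on A, f = 0 on B.
f = np.zeros(4)
f[list(A)] = 1.0
f[inner] = np.linalg.solve(np.eye(len(inner)) - p[np.ix_(inner, inner)],
                           p[np.ix_(inner, list(A))].sum(axis=1))

# f is then harmonic for the generator L off A u B: (Lf)(eta) = 0 there.
print(np.allclose((R @ f)[inner], 0.0))   # True
```

Since (Lf)(η) = λ(η){(Pf)(η) − f(η)}, a function harmonic for I−P off A∪B is harmonic for L there, which is the replacement of L by I−P mentioned in the text.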
Define the harmonic measure ν AB , \(\nu^{*}_{AB}\) on A as
Denote by \(\mathbf{E}_{\nu_{AB}}\) the expectation associated to the Markov process {η(t):t≥0} with initial distribution ν AB . The next result is the generalization to non-reversible dynamics of [2, Proposition 6.10].
Proposition A.2
Fix two disjoint subsets A, B of E. Let g:E→ℝ be a μ-integrable function. Then,
where 〈⋅,⋅〉 μ represents the scalar product in L 2(μ).
Proof
We first claim that the proposition holds for indicator functions of states. Fix an arbitrary state ξ∈E. If ξ belongs to B, both the left hand side and the right hand side of (A.12) vanish. We may therefore assume that ξ does not belong to B. In this case we may write the expectation appearing in the statement of the lemma as
where {Y n :n≥0} is the discrete time embedded Markov chain, {e n :n≥0} is a sequence of i.i.d. mean one exponential random variables independent of the jump chain {Y n :n≥0}, and ℍ B the hitting time of the set B for the discrete time Markov chain Y n . By the Markov property and by definition of the harmonic measure \(\nu^{*}_{AB}\), this expression is equal to
We may replace the hitting time and the return time H B , \(H^{+}_{A}\) by the respective times ℍ B , \(\mathbb{H}^{+}_{A}\) for the discrete chain. On the other hand, since η and ξ do not belong to B, the event {Y 0=η,Y n =ξ,n<ℍ B } represents all paths that started from η and reached ξ at time n without passing through B. In particular, by the detailed balance relation between the process and its adjoint, \(M(\eta) \mathbf{P}_{\eta} [ Y_{n}=\xi , n< \mathbb{H}_{B}] = M(\xi) \mathbf{P}^{*}_{\xi} [ Y_{n}=\eta , n< \mathbb{H}_{B}]\), and the last sum becomes
where we used the Markov property in the last step. In this formula, {θ k :k≥1} stands for the group of discrete time shifts. Summing over η, this expression can be written as
The set inside the probability represents the event that the process Y k visits A before visiting B and that its last visit to A before reaching B occurs at time n. Hence, since M(ξ)=λ(ξ)μ(ξ), since by (A.9) \(\operatorname {cap}^{*}(A,B) = \operatorname{cap}(A,B)\) and since g is the indicator of the state ξ, summing over n we get that the previous expression is equal to
By linearity and the monotone convergence theorem we get the desired result for positive and then μ-integrable functions. □
In the particular case where A={η} for η∉B we have that
for any μ-integrable function g.
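As a numerical aside, expectations of time integrals up to H B of this kind can be computed by solving the Poisson equation Lu = −g off B with u = 0 on B; this is a standard device, not a formula taken from the text, and the chain and g below are arbitrary:

```python
import numpy as np

# Same arbitrary 4-state nonreversible generator as in the earlier sketches.
R = np.array([[-3.0,  2.0,  1.0,  0.0],
              [ 0.0, -2.0,  1.5,  0.5],
              [ 1.0,  0.0, -1.5,  0.5],
              [ 0.5,  0.5,  1.0, -2.0]])

B = {2}
g = np.array([1.0, 2.0, 0.5, 3.0])   # an arbitrary nonnegative test function

# u(eta) = E_eta[ int_0^{H_B} g(eta(s)) ds ] solves L u = -g off B, u = 0 on B.
free = [i for i in range(4) if i not in B]
u = np.zeros(4)
u[free] = np.linalg.solve(-R[np.ix_(free, free)], g[free])

print(np.allclose((R @ u)[free], -g[free]))   # True: u solves the Poisson equation
```

Because −R restricted to the complement of B is an invertible M-matrix with nonnegative inverse, u is nonnegative whenever g is, consistent with its probabilistic meaning.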
Let S be a finite set, let π={A x:x∈S} be a partition of E, and let ξ x be a state in A x for each x∈S. For each μ-integrable function g denote by 〈g|π〉 μ :E→ℝ the conditional expectation of g, under μ, given the σ-algebra generated by π:
For each x∈S, let
The next result shows that if the process thermalizes quickly in each set of the partition, we may replace time averages of a bounded function by time averages of the conditional expectation. This statement plays a key role in the investigation of metastability. It assumes, however, the existence of an attractor. Its reversible version is a combination of [2, Corollary 6.5 and Lemma 6.11].
Corollary A.3
Let g:E→ℝ be a μ-integrable function. Then, for every t>0,
where |g|(η)=|g(η)| for all η in E.
The proof of this result follows from [2, Corollary 6.5], formula (A.13) and the fact that \(f^{*}_{AB}\) is bounded by one.
Beltrán, J., Landim, C. Tunneling and Metastability of Continuous Time Markov Chains II, the Nonreversible Case. J Stat Phys 149, 598–618 (2012). https://doi.org/10.1007/s10955-012-0617-4