Equilibrium Fluctuations for a One-Dimensional Interface in the Solid on Solid Approximation

An unbounded one-dimensional solid-on-solid model with integer heights is studied. Unbounded here means that there is no a priori restriction on the discrete gradient of the interface. The interaction Hamiltonian of the interface is given by a finite range part, proportional to the sum of height differences, plus a part made of exponentially decaying long range potentials. The evolution of the interface is a reversible Markov process. We prove that if this system is started in the center of a box of size L, then after a time of order L^3 it reaches, with very large probability, the top or the bottom of the box.


Introduction
The rigorous analysis of Glauber dynamics for classical spin systems, at inverse temperatures β for which the static system does not undergo a phase transition in the thermodynamic limit, has been the subject of many important works in recent years. In particular we refer to [12], [8], [9] and the references in these papers.
A natural question is what happens when the thermodynamic parameters are such that there is a phase transition. To be concrete, consider the stochastic ferromagnetic Ising model in Λ_L ≡ [1, L] × [1, L] ∩ Z^2, in the absence of an external field and with free boundary conditions, and suppose that the inverse temperature β is much larger than the critical one. Then, as is well known (see [4]), any associated infinite volume (i.e. L = +∞) Glauber dynamics is not ergodic. However, for L < +∞, every associated Glauber dynamics is ergodic, because it is an irreducible finite-state Markov chain. The problem of how this system behaves at equilibrium is discussed in [6]. In that paper it is proved that the system fluctuates between the "+" phase and the "−" one on two distinct time scales. In a first time interval the system creates a layer of the opposite phase, separated from the initial one by a one-dimensional interface. In a second time interval the interface moves until the new phase invades the system. The time the system spends for this transition is exponential in L. In this paper we study the motion of the interface in the solid-on-solid (SOS) approximation. In particular we show that, after the formation of the interface, a time of order L^3 is sufficient to reach the opposite phase.
The SOS model studied here is a one-dimensional random interface (or surface) with integer heights. There is no restriction on the discrete gradient of the surface (unbounded SOS), so the configuration space is Z^L. The interaction Hamiltonian of the interface is given by the usual energy, proportional to the sum of height differences, plus an exponentially decaying long-range potential which mimics the dependence that the interface feels due to the surrounding bulk phases. This long-range interaction is small in the regime studied, but the potentials involved have unbounded range, so it is not completely trivial to handle. We restrict the interface to stay in a finite box [1, L] × [−M, M]. This gives (see Section 2 for more details) the Gibbs measure on Z^L.
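For concreteness, the finite-range part of such a Hamiltonian — β times the sum of nearest-neighbour height differences — can be sketched in a few lines of Python (the long-range term W and any boundary terms are omitted; names and normalization are illustrative, not the paper's exact definition):

```python
def sos_energy(phi, beta):
    """Finite-range SOS energy: beta times the sum of nearest-neighbour
    height differences |phi_{i+1} - phi_i|.

    The exponentially decaying long-range term W of the paper is omitted
    in this sketch.
    """
    return beta * sum(abs(phi[i + 1] - phi[i]) for i in range(len(phi) - 1))

# A flat interface has zero finite-range energy; a single unit step costs beta.
print(sos_energy([0, 0, 0, 0], 2.0))  # 0.0
print(sos_energy([0, 0, 1, 1], 2.0))  # 2.0
```

Note that this energy depends only on the gradients of φ, which is what later makes the change to gradient variables natural.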
The evolution of the interface is described by a reversible Markov process whose jump rates are bounded and allow only transitions of the form (φ_1, ..., φ_L) → (φ_1, ..., φ_k ± 1, ..., φ_L), for some k = 1, ..., L. We study this process for M = L/2, i.e. in a "box" of size L, and we prove, in a sense made precise by Theorem 2.1, that if the interface is started in the center of the box then in a time of order L^3 it reaches the bottom or the top of the box. The technique we use to obtain this result mixes analytical and probabilistic tools. It is in fact standard to obtain estimates on the exit time distribution of a reversible Markov process from a region once one has a lower bound on the spectral gap of its generator with Dirichlet boundary conditions. We will not bound the spectral gap of G_L^{L/2} itself but that of the generator of an auxiliary, simpler process, and we conclude with a coupling argument.
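A discrete-time caricature of such single-site ±1 dynamics — Metropolis rates reversible for the finite-range energy, with heights confined to [−M, M] — can be sketched as follows (illustrative only: the paper works with a general continuous-time reversible generator, and the long-range term is ignored here):

```python
import math
import random

def sos_energy(phi, beta):
    """Finite-range SOS energy (long-range term omitted)."""
    return beta * sum(abs(phi[i + 1] - phi[i]) for i in range(len(phi) - 1))

def metropolis_step(phi, beta, M, rng):
    """One +-1 single-site Metropolis update; moves leaving [-M, M] are rejected."""
    k = rng.randrange(len(phi))
    s = rng.choice((-1, 1))
    if abs(phi[k] + s) > M:
        return phi                       # move would leave the box: reject
    new = list(phi)
    new[k] += s
    dE = sos_energy(new, beta) - sos_energy(phi, beta)
    if dE <= 0 or rng.random() < math.exp(-dE):
        return new                       # accept with Metropolis probability
    return phi

rng = random.Random(0)
phi = [0] * 10                           # interface started flat in the center
for _ in range(1000):
    phi = metropolis_step(phi, beta=2.0, M=5, rng=rng)
print(max(abs(x) for x in phi) <= 5)     # True: heights stay inside the box
```

Reversibility of these rates with respect to the Gibbs weight e^{-H} is the discrete-time analogue of the reversibility assumed for the generator in the paper.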
This paper completes the study of the asymptotic properties of the solid-on-solid model started in [10] where a similar model, in the presence of boundary conditions, is studied.
Acknowledgments: I would like to thank Fabio Martinelli who posed this problem to me and helped me with many constructive discussions.

Notation and Results
Our sample space is Ω_L ≡ Z^L for fixed L ∈ N. Configurations, i.e. elements of the sample space, will be denoted by Greek letters, e.g. φ = (φ_1, ..., φ_L) ∈ Ω_L. Given M ∈ N ∪ {+∞} and β > 0, one defines the energy associated with the configuration φ ∈ Ω_L as the sum of a local interaction H_L, proportional to the sum of the height differences of φ, and a long range interaction W that will be defined below. Consider the lattices Z^2 and Z^2_* ≡ (1/2, 1/2) + Z^2 as graphs embedded in R^2, equipped with the usual Euclidean metric denoted by dist(·, ·). For every φ ∈ Ω_L denote by Γ(φ) ⊂ R^2 the contour associated with φ. For A ⊂ R^2 and p ∈ R^2, the distance of p from A is defined as dist(p, A) ≡ inf{dist(p, y) : y ∈ A}. If we denote by ∆(φ) ≡ {p_* ∈ Z^2_* : dist(p_*, Γ(φ)) = 1/2 or dist(p_*, Γ(φ)) = 1/√2} the set of sites attached to the contour Γ(φ), the long range interaction may be written as a sum of potentials Φ(β, Λ), where the sum is over all Λ ⊂ V_L^M connected in the sense of the dual graph Z^2_*. The potential Φ(·, ·) is a function satisfying (see [2]): i) there exists β̄ > 0 such that for every β > β̄ we have a decay bound (2.3) for any k > 0 and p_* ∈ Z^2_*; here the sum is over all connected Λ ⊂ Z^2_* such that Λ ∋ p_* and diam(Λ) ≥ k, m(β) is a positive function such that m(β) → +∞ as β → +∞, and diam(Λ) is the Euclidean diameter of the set Λ. ii) For every p_* ∈ Z^2_*, Φ(β, Λ + p_*) = Φ(β, Λ). (2.4) It is easy to check that for every β > 0 and M ∈ (0, +∞] the partition function Z_L^M, defined with ||φ||_∞ ≡ max_{1≤i≤L} |φ_i|, is finite. Thus it is possible to define on Ω_L the Gibbs measure µ_L^M.
This measure represents the equilibrium of the system. The dynamics of the system is a continuous time Markov chain with values in Ω L and stationary measure µ M L . This process will be defined by means of its generator.
It is simple to prove that there exists a unique Markov process Φ ≡ {Φ(t) : t ≥ 0} with this generator. Φ is reversible and G_L^M is negative semidefinite. The absolute value of the largest negative eigenvalue of G_L^M is denoted by λ_1(G_L^M) and is called the spectral gap of G_L^M. We will give a direct construction of Φ in Section 4. More precisely (see Proposition 3.1), we will define a measurable space (Ω_L, F_L) and a family of probability measures {P_φ : φ ∈ Ω_L} on it such that: i) for every φ ∈ Ω_L the process Φ is a Markov process on (Ω_L, F_L, P_φ) with generator G_L^{∅,M,W}. For every measurable set A ⊂ Ω_L define the first exit time from A as τ_A ≡ inf{t ≥ 0 : Φ(t) ∉ A}. The main result of this paper is: Theorem 2.1. Let Φ be the process associated with the generator G_L^{L/2}. Fix α ∈ (0, 1/4), ε ∈ (0, 1/100) and define A ≡ {φ ∈ Ω_L : ||φ||_∞ ≤ (1 − ε)L/2}. Then there exists β̄ > 0 and, for every β > β̄, positive constants K_1(β), K_2(β), K_3(β) and K_4(β) such that (2.9) holds for any L > 0.
This result can be read in the following way: starting the interface in a square box of size L, from an initial condition randomly chosen under µ_L^{∅,L/2,W}(· | ||·||_∞ ≤ αL) (i.e. the interface is forced to start at least a distance αL away from the top and the bottom of the box), the probability that the interface takes a time larger than t to come within εL/2 of the top or the bottom of the box is exponentially small in t/L^3.

Remark 2.2.
In what follows we use constants K_1, K_2, ... in the statements of theorems, propositions and so on, while we use constants C_1, C_2, ... inside proofs. The reader should be warned that the numbering of constants is consistent only within the same statement or proof. This means, e.g., that constants which appear in the proof or in the statement of one proposition may differ from constants with the same name which appear in the proof or in the statement of a different proposition.

Proof of Main Result
In this section we prove our main result, Theorem 2.1. The technique is the following. One can estimate the first exit time of a reversible Markov process from a region A by bounding from below the spectral gap of the generator of the process. We will not bound the spectral gap of the process Φ directly; instead we estimate the spectral gap of a simpler auxiliary process Φ̄ which, in the region A, is similar to Φ. A coupling argument (Proposition 3.1) then concludes the proof.
We begin this section by introducing the above-mentioned auxiliary process. Fix L ∈ N, β and M > 0, and define on Ω_L the probability measure μ̄_L^M (see Section 2). The process Φ̄ is defined by means of its generator Ḡ_L^M on L^2(Ω_L, μ̄_L^M); its jump rates c̄(φ, ψ) vanish unless μ̄_L^M(φ) > 0 and ψ = φ ± δ_k for some k = 1, ..., L.
It is simple to check that Ḡ_L^M is a self-adjoint Markov generator which defines a unique Markov process. Because for φ ∈ A (A is defined in Theorem 2.1) the jump rates of Ḡ_L^{L/2} are very close to the jump rates of G_L^{L/2} (see (2.7)), the processes Φ̄ and Φ evolve in a similar way as long as they remain within A. This fact is formally proved in the following result, which also gives a direct construction of the processes. Proposition 3.1. It is possible to construct a family of probability spaces (Ω_L, F_L, P_{φ,φ̄}) and a process {(Φ(t), Φ̄(t)) : t ≥ 0} taking values in Ω_L × Ω_L, with P_{φ,φ̄}(Φ(0) = φ, Φ̄(0) = φ̄) = 1 for any (φ, φ̄) ∈ Ω_L × Ω_L, and such that (3.4) holds for every φ ∈ A. This proposition will be proved in Section 4. The advantage of considering the process Φ̄ instead of Φ is that the former is simpler to study, because it has no interaction with the top and the bottom of the box V_L^M. In particular, in Section 5 we will prove the following result on the spectral gap λ_1(Ḡ_L^M): Proposition 3.2. There exists β̄ > 0 such that for every β ≥ β̄ there exists K_1(β) > 0 so that λ_1(Ḡ_L^M) ≥ K_1(β) L^{−3} for every L and M > 0.
We are now in a position to prove Theorem 2.1.
Proof of Theorem 2.1.
In this proof we simplify notation by writing G ≡ G_L^{L/2}, Ḡ ≡ Ḡ_L^{L/2} and μ̄ ≡ μ̄_L^{L/2}. Recall that σ was defined in Proposition 3.1 as the first time at which Φ̄ and Φ differ, and suppose that φ ∈ A. Then for any t > 0: while for any s > 0: In conclusion, for any t, s > 0: where α ∈ (0, 1/4). By (3.6) we obtain: We are going to bound from above the first term on the right hand side of (3.7). We will use a Markov process with killing (see [11]). The Dirichlet form associated with the generator Ḡ is a positive semidefinite bilinear form defined through ⟨·, ·⟩_{L^2(Ω_L, μ̄)}, the scalar product in L^2(Ω_L, μ̄). Because μ̄(A) > 0 it is possible to define the corresponding positive semidefinite bilinear form on L^2(A, μ̄). Standard functional analysis methods show that there exists a unique positive semidefinite self-adjoint (in L^2(A, μ̄)) operator Ḡ_A such that: This operator has a probabilistic interpretation: it is the generator of a process which evolves according to Ḡ as long as it stays within A, but is killed when it tries to jump outside A (see [11]).
The semigroup e^{tḠ_A} generated by Ḡ_A is sub-stochastic and we have: for every f ∈ L^2(A, μ̄). In particular, taking f ≡ 1, we obtain: Thus: The spectral theorem has been used in the last line. The spectral gap λ_1(Ḡ_A) is characterized by the following variational property: This relation and (3.8) imply: By Proposition 3.2 we know that λ_1(Ḡ) ≥ C_1(β)L^{−3}, so (3.9) yields:
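For intuition about the variational characterization λ_1 = inf_f E(f, f)/Var(f), it can be checked numerically on a toy reversible chain — a reflecting nearest-neighbour walk on n sites with uniform stationary measure, not the interface dynamics itself — whose gap 2(1 − cos(π/n)) ≈ π²/n² illustrates the polynomial-in-size decay at play (a sketch under these assumptions):

```python
import math

def dirichlet_ratio(f, n):
    """Dirichlet form over variance for the reflecting nearest-neighbour
    walk on {0, ..., n-1} with uniform stationary measure 1/n."""
    mean = sum(f) / n
    var = sum((v - mean) ** 2 for v in f) / n
    # E(f, f) = (1/n) * sum over edges of the squared increment.
    dirichlet = sum((f[x + 1] - f[x]) ** 2 for x in range(n - 1)) / n
    return dirichlet / var

n = 20
# First non-trivial eigenfunction of this walk (discrete cosine mode)
# attains the infimum; the gap is 2(1 - cos(pi/n)) ~ pi^2 / n^2.
f1 = [math.cos((x + 0.5) * math.pi / n) for x in range(n)]
gap = 2 * (1 - math.cos(math.pi / n))
print(abs(dirichlet_ratio(f1, n) - gap) < 1e-12)  # True
```

The same mechanism — a gap polynomially small in the system size — is what turns Proposition 3.2's L^{-3} bound into the t/L^3 exit-time estimate.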
We claim that there exists C_2(β) > 0 such that: This simple technical bound is proved in the appendix (see Lemma A1.1). In conclusion: Taking s = L^4, by (3.4) we obtain: i.e. (2.9).

The Coupling
In this section we construct explicitly a stochastic coupling between Φ and Φ̄; in particular we prove Proposition 3.1. The technique we use is an application of the so-called basic coupling. This is a coupling between jump processes under which the processes jump together as long as possible, subject to the constraint that each must jump with its own jump rates. Because the jump rates of Φ and Φ̄ are very close when they are in A = {φ ∈ Ω_L : ||φ||_∞ ≤ (1 − ε)L/2}, the two processes will evolve identically for a long time.
The first step in the construction of the coupling is to show that the jump rates of Φ andΦ are close in A.
To simplify, we adopt the notation of the last section, writing c and c̄ for the jump rates of G_L^{L/2} and Ḡ_L^{L/2} respectively. Lemma 4.1. The bound (4.1) holds for every φ ∈ A. Proof. Every Λ contributing to the sum on the right hand side of (4.3) satisfies diam(Λ) ≥ ε C_1 L for some constant C_1 > 0, so we can use condition (2.3) to bound that sum. From this relation and (4.2) we obtain (4.1).
We can now prove the main result of this section. Proof of Proposition 3.1.
We use the basic coupling. To every site k = 1, ..., L we associate two independent Poisson processes, each with rate c_max ≡ sup_{φ,ψ} max{c(φ, ψ), c̄(φ, ψ)}. We denote these processes {N^+_{k,t} : t ≥ 0} and {N^−_{k,t} : t ≥ 0}, and the arrival times of each process by {τ^+_{k,n} : n ∈ N} and {τ^−_{k,n} : n ∈ N} respectively. Assume that the Poisson processes associated with different sites are also mutually independent. We say that at each point in space-time of the form (k, τ^+_{k,n}) there is a "+" mark and that at each point of the form (k, τ^−_{k,n}) there is a "−" mark. Next we associate with each arrival time τ^±_{k,n} a random variable U^±_{k,n} with uniform distribution on [0, 1]. All these random variables are assumed to be independent among themselves and independent of the previously introduced Poisson processes. Obviously there exists a probability space on which all these objects are defined. We now explain how the various processes are constructed on this space. The process Φ (resp. Φ̄) is defined in the following manner. We know that almost surely the arrival times τ^*_{k,n}, k = 1, ..., L, n ∈ N, * = ±, are all distinct. We update the state of the process each time there is a mark at some k = 1, ..., L, according to the following rule.
• If the mark under consideration is at the point (k, τ^*_{k,n}), with * = ±, and the configuration of Φ (resp. of Φ̄) immediately before time τ^*_{k,n} was φ (resp. φ̄), then the configuration of Φ (resp. of Φ̄) immediately after τ^*_{k,n} will be φ ± δ_k (resp. φ̄ ± δ_k) if and only if the corresponding uniform variable does not exceed the normalized jump rate, i.e. U^±_{k,n} ≤ c(φ, φ ± δ_k)/c_max (resp. U^±_{k,n} ≤ c̄(φ̄, φ̄ ± δ_k)/c_max). Otherwise the configuration remains the same. It is easy to check that this construction satisfies condition i) of the proposition. It remains to prove (3.4).
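The rule above — both copies read the same marks and the same uniforms, each comparing against its own normalized rate — can be sketched as follows (toy constant rates, purely illustrative; with identical rates the two copies never separate, which is the mechanism behind Proposition 3.1):

```python
import random

def basic_coupling_update(phi, psi, k, s, u, rate_phi, rate_psi):
    """One shared mark (site k, sign s, uniform u): each process accepts
    the move cfg -> cfg + s*delta_k iff u is below its own normalized rate.
    With identical rates the two copies always move together."""
    new_phi, new_psi = list(phi), list(psi)
    if u < rate_phi(phi, k, s):
        new_phi[k] += s
    if u < rate_psi(psi, k, s):
        new_psi[k] += s
    return new_phi, new_psi

rng = random.Random(1)
rate = lambda cfg, k, s: 0.5          # toy constant normalized rate for both
phi = psi = [0] * 5
for _ in range(200):
    k, s, u = rng.randrange(5), rng.choice((-1, 1)), rng.random()
    phi, psi = basic_coupling_update(phi, psi, k, s, u, rate, rate)
print(phi == psi)  # True: equal rates => the coupled copies never decouple
```

When the two rates differ by at most δ, each shared mark separates the copies with probability at most δ, which is exactly how Lemma 4.1 feeds into the bound (3.4).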
Define N_t ≡ Σ_{k=1}^{L} (N^+_{k,t} + N^−_{k,t}). This process counts the number of possible updates of the processes Φ and Φ̄ in the interval [0, t]. It is clear that {N_t : t ≥ 0} is a Poisson process with rate λ ≡ 2L c_max. For φ ∈ A we have: To bound from above P_{φ,φ}(σ ≤ t, σ ≤ τ̄ | N_t = n), observe that if Φ and Φ̄ are initially in the same state φ ∈ A and if N_t = n, i.e. there were n possible updates in [0, t], then σ ∈ [0, t] is possible only if, for some i = 1, ..., n, ψ ∈ Ω_L and k = 1, ..., L, exactly one of the two processes accepts the i-th update. The probability of this event, for fixed i = 1, ..., n, ψ ∈ Ω_L and k = 1, ..., L, is: Moreover, because σ ≤ τ̄, we have: By Lemma 4.1 we have a bound uniform over k = 1, ..., L and ψ ∈ A. This estimate together with (4.4) gives (3.4).

Spectral Gap forḠ M L
In this section we will prove Proposition 3.2. The strategy of the proof is the following. By a simple change of variables the Glauber dynamics associated with Ḡ_L^M becomes a dynamics of Kawasaki type, while the measure μ̄_L^M becomes, in the new variables, a product measure perturbed by an infinite range interaction. This interaction term is small if β is large. Without this perturbation the result is very simple to prove; its presence requires a little extra work.
The change of variables η = η(φ) defined in (5.1) will play a central role in what follows, and the distribution of η is easily calculated.

Observe that η(φ) defined in (5.1) is the vector of the discrete derivatives of the configuration φ ∈ Ω_L. The study of the SOS interface can be carried out using the variables φ or η indifferently; we will use the latter in the sequel. Let ν̄_L^M denote the distribution of η (5.2), i.e. the image of μ̄_L^M under the map (5.1).
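For concreteness, the passage from heights φ to gradients η and back can be sketched as follows (here with the illustrative convention η_1 = φ_1, which makes the map a bijection; the paper's exact convention in (5.1) is not reproduced here):

```python
def grad(phi):
    """eta(phi): discrete derivatives, with the convention eta_1 = phi_1."""
    return [phi[0]] + [phi[i] - phi[i - 1] for i in range(1, len(phi))]

def T(eta):
    """Inverse map T: partial sums of the gradients recover the heights."""
    phi, s = [], 0
    for e in eta:
        s += e
        phi.append(s)
    return phi

phi = [2, 3, 3, 1, 4]
print(T(grad(phi)) == phi)  # True: the two maps are mutually inverse
```

Note that a single-site height move φ → φ ± δ_k changes two adjacent gradient variables in opposite directions, which is why the dynamics becomes Kawasaki-type in the η variables.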
The expected value operator with respect to ν̄_L^M will be denoted by Ē_L^M(·), while the covariance form will be denoted by Ē_L^M(·, ·), i.e.
where f, g ∈ L^2(ν̄_L^M). On the same Hilbert space a quadratic form of Kawasaki type, indexed by h, k = 1, ..., L, is defined. We will use this form to estimate the spectral gap of Ḡ_L^M: Lemma 5.1. There exist two constants K_1(β) and K_2(β) such that (5.3) holds.
Using the fact that Ḡ_L^M is self-adjoint in L^2(μ̄_L^M), it is simple to check that: If we recall the definition of the jump rates (3.3), a simple calculation shows that there exist C_1(β) and C_2(β) such that: for every φ ∈ Ω_L and L, M > 0. This means that Ḡ_L^M(f, f) can be bounded from below (from above) by C_1/2 (by C_2/2) times the quadratic form above. Now we use the change of variables φ = T(η) to obtain: By the variational characterization of the spectral gap and (5.4), we have (5.3).
We can use this lemma to prove Proposition 3.2. In fact, by (5.3) it is easy to show that (3.5) is equivalent to the existence of C_1(β) > 0 such that the Poincaré inequality (5.5) holds for every f ∈ L^2(ν̄_L^M). The key step in the proof of this inequality is contained in the following result.
Proposition 5.2. There exists β̄ > 0 such that for every β ≥ β̄ it is possible to find K_1(β) > 0 such that (5.6) holds. This proposition shows the perturbative nature of the proof. In fact, if W_L^∞ = 0 the measure ν̄_L^M(·|η_1) is a product measure. In this case Proposition 5.2 says that there exists a uniformly positive spectral gap for a random walk in Z^{L−1} in which each component of the walk is independent of the others. It is well known that this gap exists if each component by itself exhibits a uniformly positive spectral gap, and the existence of this one-site spectral gap is easily proved. Because the perturbation W_L^∞ goes to 0 as β → +∞, the result should remain true for large values of β. Before proving the key Proposition 5.2, we show how (5.5) follows from it. Proof of Proposition 3.2.
Fix f ∈ L^2(ν̄_L^M); a simple calculation yields: The first term on the right hand side of this equation can be bounded using Proposition 5.2. For the second term, notice that η_1 is uniformly distributed on [−M, M] ∩ Z. It is well known (see [1] for example) that this implies the existence of C_1(β) > 0 such that the one-site Poincaré inequality holds for every g = g(η_1) ∈ L^2(ν̄_L^M) and M > 0. In particular this formula is true for g(η_1) ≡ Ē_L^M(f | η_1). In conclusion we obtain: We can use this and (5.6) in the left hand side of (5.7) to get: Now notice that the change of coordinates η → η + δ_j has a bounded Jacobian, so (5.8) gives: which is the same as (5.5).
The remaining part of this section is devoted to the proof of Proposition 5.2. It is convenient to introduce some extra notation. Recall that η_1 is independent of η_2, ..., η_L and that W_L^∞(β, T(η)) does not depend on η_1. This implies that the conditional measure ν̄_L^M(·|η_1) does not depend on η_1.
Define on Ω_L the probability measure ν̄_L and denote by Ē_L(·) the expected value with respect to this measure. We restate Proposition 5.2 as: Proposition 5.3. There exists β̄ > 0 such that for every β ≥ β̄ it is possible to find K_1(β) > 0 such that (5.9) holds for any f ∈ L^2(ν̄_L) and L > 0.
We will prove this result using the martingale approach outlined in [5]. Define the subsets α_j ≡ {j, j + 1, ..., L} of N. For every j = 1, ..., L the restriction of η to the set α_j is denoted by η_{α_j} ≡ (η_j, ..., η_L). If we define f_j ≡ Ē_L(f | η_{α_j}), it is simple to check that (5.10) holds. On the right hand side of this formula there is a sum of expected values of conditional variances. Notice that the random variable f_j, by definition, depends only on η_{α_j}; so if η_{α_{j+1}} is fixed, it depends only on η_j (we will say that f_j is local in j). This means that each of the variances on the right hand side of (5.10) is the variance of a local function.
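The decomposition (5.10) is an instance of the telescoping law of total variance; this can be checked numerically on a toy uniform product law on {0,1}^L (the measure and the function f below are illustrative stand-ins, not the ν̄_L of the paper):

```python
import itertools
import random

rng = random.Random(2)
L = 4
# Toy setting: uniform product law on {0,1}^L and an arbitrary function f.
f_vals = {eta: rng.random() for eta in itertools.product((0, 1), repeat=L)}

def cond_exp(j):
    """f_j = E(f | eta_{alpha_j}) with alpha_j = {j, ..., L}: average f
    over the configurations sharing the suffix (eta_j, ..., eta_L)."""
    out = {}
    for eta, v in f_vals.items():
        out.setdefault(eta[j - 1:], []).append(v)
    return {key: sum(vals) / len(vals) for key, vals in out.items()}

def var(d):
    """Variance under the uniform law (all classes are equiprobable here)."""
    vals = list(d.values())
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

total = var(f_vals)
# E Var(f_j | eta_{alpha_{j+1}}) = Var(f_j) - Var(f_{j+1}); summing over j
# telescopes to Var(f), which is exactly (5.10).
pieces = sum(var(cond_exp(j)) - var(cond_exp(j + 1)) for j in range(1, L + 1))
print(abs(total - pieces) < 1e-12)  # True
```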
The method we will use to prove (5.9) consists of two steps. The first step is to show that the marginal in η_j of the measure ν̄_L(·|η_{α_{j+1}}) exhibits a positive spectral gap, uniformly in L > 0. Lemma 5.4. There exists β̄ > 0 such that for every β ≥ β̄ there exists K_1(β) > 0 so that (5.11) holds for every L > 0 and every f ∈ L^2(ν̄_L(·|η_{α_{j+1}})) local in j.
Because of this lemma, (5.10) becomes (5.12). The second step is to show that the right hand side of (5.12) can be bounded from above by the correct quadratic form: Lemma 5.5. (5.13) holds for every β ≥ β̄, L > 0 and f ∈ L^2(ν̄_L).
If we use Lemma 5.5 in (5.12) we immediately obtain (5.9). In order to prove Lemma 5.4 and Lemma 5.5 we need a preliminary result.
In order to prove that Ŵ_j^c(η) does not depend on η_1, ..., η_j, it suffices to show that (5.14) and (5.15) hold for every h ∈ Z and k = 1, ..., j. Define the corresponding shift map; this map is bijective for every k = 1, ..., j. Furthermore, because of the translation invariance (2.4) of Φ(β, ·), we have: From Step 1 we obtain: and: for every i > j. By these inequalities, to prove (5.14) and (5.15) we only have to show that, for every i ≥ j, the following holds. Proof of Step 2.
Let Λ ∈ S_j(η) be such that it intersects ∆(η) to the left of i. Then it also intersects ∆(η + δ_i) in the same points. Conversely, if Λ ∈ S_j(η) intersects ∆(η + δ_i) to the left of i, then it intersects ∆(η) there as well. In conclusion: By this relation we can cancel from the difference of the two sums all the terms Φ(β, Λ) such that Λ intersects ∆(η), or ∆(η + δ_i), to the left of i. It follows that the sums actually run only over those Λ which intersect neither ∆(η) nor ∆(η + δ_i) to the left of i. Because these Λ have to intersect ∆(η), the intersection is to the right of i. This proves (5.18).
For any S ⊂ Z^2_* we will say that p ∈ S is +unstable if: where e_y ≡ (0, 1) ∈ R^2. Similarly we will say that p ∈ S is −unstable if: The classes of +unstable and −unstable points of the set S will be denoted by I_+(S) and I_−(S) respectively.
This fact can be easily checked by using geometric considerations.
It follows that: where β is large enough and (2.3) has been used. From this estimate we obtain (5.17) which, as we noticed before, implies (5.14) and (5.15).
It follows immediately from [3] and [10]. We now turn to the proof of Lemma 5.5; we first need a technical result. Lemma 5.7. For every i = 2, ..., L we have: Proof.
Proof of Lemma 5.5.
We borrow the basic idea of the proof from [9]. We will show that, for β large enough, (5.23) holds for every f ∈ L^2(ν̄_L). Fix f ∈ L^2(ν̄_L) and i > 1. By Lemma 5.7: (5.24) We have to estimate the second term on the right hand side of this relation. By the Schwarz inequality and Lemma 5.6 we obtain: where ε = ε(β) → 0 as β → +∞. Thus, if β is large enough: (5.25) It is simple to check that: So by (5.11) we know that, for β large enough, there exists C_1 > 0 so that: By using this bound in (5.25) we get: Exchanging the sums on the right hand side of (5.26) we have

This implies
and by (5.24): Taking the expected value on both sides of this relation, and recalling that the change of variables η → η − δ_i has a bounded Jacobian (see Lemma 5.6), we obtain: We sum this relation over i = 2, ..., L. An elementary computation gives: Recalling that ∂_i^+ f_1 = ∂_i^+ f = Ē_L(∂_i^+ f | η_{α_1}), this implies: To conclude the proof of (5.23) we choose β̄ > 0 such that β ≥ β̄ implies C_5 ε^2(β) < 1/2. Proof.
An elementary calculation shows that: To complete the proof we have to bound μ̄_L^{L/2}(B) from below. We refer to [2] for the proof that there exists C_2(β) > 0 such that μ̄