Some results about ergodicity in shape for a crystal growth model

We study a crystal growth Markov model proposed by Gates and Westcott (\cite{Kinetics1}, \cite{Kinetics2}). This is an aggregation process where particles are packed in a square lattice according to prescribed deposition rates. The model is parametrized by three values $(\beta_i, i=0,1,2)$ corresponding to depositions on three different types of sites. The main problem is to determine, for the shape of the crystal, when ergodicity and when transience occur. In \cite{AMS} and \cite{MarkovModels}, sufficient conditions are given both for ergodicity and for transience. We establish improved conditions and give a precise description of the asymptotic behavior in a special case.


Definitions and first properties
Let $n$ be an integer, $n \ge 2$. We consider a set of $n$ aligned sites, each site corresponding to a growing pile of particles. The state of a lamellar crystal (see [5]) is described by a vector $x = (x(1), \dots, x(n)) \in \mathbb{N}^n$, where the value of $x(i)$ may be thought of as the height of the pile above site $i$. For $1 \le j \le n$, $e_j$ will stand for the unit vector $e_j(i) = \delta_{i,j}$.
The deposition rate at site $j$ is $\beta_{V_j(x)}$, where $V_j(x) \in \{0,1,2\}$ denotes the number of nearest neighbours of site $j$ whose pile is strictly higher than $x(j)$; thus $\beta_0$ governs peaks and $\beta_2$ governs holes. For $V_j(x)$ to be well-defined for $j = 1$ and $j = n$, we adopt from now on the convention that $x(0) = x(n+1) = 0$, unless otherwise specified. This is the so-called zero condition, which amounts to adding a leftmost and a rightmost site that stay at height $0$ forever. Another natural convention is the periodic condition, which consists in setting $x(0) = x(n)$ and $x(n+1) = x(1)$; we believe that all the results here can be transposed to the periodic condition (in the same way as Theorem 1.1 in [3]). We shall also use the infinite condition (resp. the zero-infinite condition), that is $x(0) = x(n+1) = \infty$ (resp. $x(0) = 0$, $x(n+1) = \infty$), and anything relative to this condition will be denoted with the superscript $\infty$ (resp. the superscript $0/\infty$).
For a configuration $x$, we define the shape $h$ of $x$ by $h = (\Delta_1 x, \dots, \Delta_{n-1} x)$, where $\Delta_j x = x(j) - x(j+1)$, $j = 1, \dots, n-1$.
Knowing $h$ is equivalent to knowing $x$ up to vertical translation. It is important to remark that $V_j(x)$ only depends on $x$ through $h$, and $V_j(h)$ will denote the value of $V_j(x)$ for any $x$ whose shape is $h$. Recall that the crystal process $X^n$ with $n$ sites and parameter $\beta$ is the Markov process on $\mathbb{N}^n$ with transition rates $q(x, x + e_j) = \beta_{V_j(x)}$, $j = 1, \dots, n$. Let us define, for $j = 1, \dots, n$, the vector $f_j \in \mathbb{Z}^{n-1}$ given by $f_j(i) = \delta_{i,j} - \delta_{i,j-1}$, which is the change of shape caused by a deposition at site $j$. The object of main interest is the process of the shape of $X^n$, that we now define, rather than the process $X^n$ itself.
Definition 2. The shape process with $n$ sites and parameter $\beta$ is defined by $H^n_t = (\Delta_1 X^n_t, \dots, \Delta_{n-1} X^n_t)$, where $X^n$ is a crystal process with $n$ sites and parameter $\beta$. $H^n$ is a Markov process on $\mathbb{Z}^{n-1}$ with transition mechanism given by $q(h, h + f_j) = \beta_{V_j(h)}$, $j = 1, \dots, n$, and $q(h, h') = 0$ if $h' \notin \{h + f_1, \dots, h + f_n\}$.
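To fix ideas, the shape map and the transition vectors of Definition 2 can be sketched in code. The formula $f_j(i) = \delta_{i,j} - \delta_{i,j-1}$ used below is our reading of the transition vectors (a deposition at site $j$ raises $\Delta_j$ and lowers $\Delta_{j-1}$); it is an assumption made for illustration, consistent with the transition mechanism above.

```python
def shape(x):
    """Shape h of a configuration x: the successive height differences."""
    return [x[j] - x[j + 1] for j in range(len(x) - 1)]

def f(j, n):
    """Transition vector f_j of the shape process (1-based site index j):
    f_j(i) = delta_{i,j} - delta_{i,j-1}, for i = 1, ..., n-1."""
    v = [0] * (n - 1)
    if j <= n - 1:
        v[j - 1] += 1
    if j >= 2:
        v[j - 2] -= 1
    return v

n = 5
x = [3, 1, 4, 4, 2]
h = shape(x)
# h determines x only up to vertical translation:
assert shape([v + 7 for v in x]) == h
# depositing one particle at site j shifts the shape by f_j:
for j in range(1, n + 1):
    y = list(x)
    y[j - 1] += 1
    assert shape(y) == [a + b for a, b in zip(h, f(j, n))]
```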
These processes have a basic symmetry property: the process $(X^n_t(n), \dots, X^n_t(1))$ has the same distribution as $X^n$, and consequently the process $(-\Delta_{n-1} X^n, \dots, -\Delta_1 X^n)$ has the same distribution as $H^n$. There is a convenient construction of $X^n$, and hence of $H^n$, that we now describe and will later refer to as the Poisson construction. As we will see later, the interest of this construction is to yield useful couplings. Let $b_0$, $b_1$ and $b_2$ be the $\beta_k$'s ranked in increasing order. We take a family of Poisson processes $(N_{k,j}, 0 \le k \le 2, 1 \le j \le n)$ such that:
- $N_{k,j}$ has intensity $b_k$;
- the triples $(N_{0,j}, N_{1,j}, N_{2,j})_{1 \le j \le n}$ are mutually independent;
- for any $j$ there exist three processes $\tilde{N}_{0,j}$, $\tilde{N}_{1,j}$ and $\tilde{N}_{2,j}$, mutually independent, with intensities $b_0$, $b_1 - b_0$ and $b_2 - b_1$ respectively, such that $N_{0,j} = \tilde{N}_{0,j}$, $N_{1,j} = \tilde{N}_{0,j} + \tilde{N}_{1,j}$, $N_{2,j} = \tilde{N}_{0,j} + \tilde{N}_{1,j} + \tilde{N}_{2,j}$.
We build the process $(X^n_t, t \ge 0)$ starting from $x_0$ by letting $X^n_0 = x_0$ and, at any jump time $t$ of some $N_{k,j}$, setting
$$X^n_t = X^n_{t-} + e_j \quad \text{if } \beta_{V_j(X^n_{t-})} = b_k. \tag{1}$$
It is not hard to check that this process has the Markov property and the desired jump rates. Hence it is a crystal process starting from $x_0$ with $n$ sites and parameter $\beta$.
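As an illustration of the dynamics (though not of the coupled clocks $N_{k,j}$ themselves), here is a minimal Gillespie-type simulation of the crystal process under the zero condition. It assumes, consistently with the roles of $\beta_0$ for peaks and $\beta_2$ for holes, that $V_j(x)$ is the number of neighbours of site $j$ strictly higher than $x(j)$; this reading of $V_j$ is our reconstruction.

```python
import random

def V(x, j):
    """Number of neighbours of site j (0-based) strictly higher than x[j],
    with the zero condition x(0) = x(n+1) = 0."""
    lo = x[j - 1] if j > 0 else 0
    hi = x[j + 1] if j < len(x) - 1 else 0
    return (lo > x[j]) + (hi > x[j])

def simulate(n, beta, t_max, seed=0):
    """Simulate the crystal process: deposition at site j at rate beta[V_j(x)]."""
    rng = random.Random(seed)
    x, t = [0] * n, 0.0
    while True:
        rates = [beta[V(x, j)] for j in range(n)]
        total = sum(rates)
        t += rng.expovariate(total)       # waiting time to the next deposition
        if t > t_max:
            return x
        u, acc = rng.random() * total, 0.0
        for j, r in enumerate(rates):     # choose the site proportionally to its rate
            acc += r
            if u <= acc:
                x[j] += 1
                break
```

With $\beta_2 < \beta_0$ one should observe the comb shapes of Section 3, while with $\beta_0 < \beta_1 \le \beta_2$ the shape stays tight, in accordance with Theorem 1.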
For any positive function $f$ on $\mathbb{N}$ or $\mathbb{R}_+$, we write $f(x) = O^{\exp}(x)$ if there exist constants $C, \alpha > 0$ such that $f(x) \le C e^{-\alpha x}$ for all $x$. If $f$ also depends on some other variable $t$, the notation $f(x,t) = O^{\exp}_t(x)$ means that the same inequality holds with constants $C$ and $\alpha$ independent of $t$. We say that the process $X^n$ is ergodic in shape, resp. transient in shape, whenever the process $H^n$ is ergodic, resp. transient. The notation $\mathbb{P}_x$, resp. $\mathbb{P}_h$, will stand for the distribution of the trajectory $(X^n_t, t \ge 0)$ starting from $x$, resp. of the trajectory $(H^n_t, t \ge 0)$ starting from $h$. If there is any ambiguity about the parameter $\beta$, the notation $\mathbb{P}^\beta_x$ will be used instead. The null vector will be denoted by $0$.

This simple model was first described by Gates and Westcott in [1], where attention was focused on the special case $\beta_2 > \beta_0$ together with an additional algebraic relation on $\beta$. Under this assumption the process $H^n$ with periodic conditions enjoys a remarkable dynamic reversibility property that implies ergodicity, and even allows an exact computation of the invariant distribution. Without this assumption on $\beta$, unfortunately, there is no such simple way to determine whether ergodicity occurs or not. However we can make a naive remark: since $\beta_0$ is the deposition rate at peaks and $\beta_2$ the one at holes, basic intuition says that increasing $\beta_0$ should make the shape more irregular, making the process $H^n$ more likely to be transient. Conversely, increasing $\beta_2$ should make the shape smoother, making the process $H^n$ more likely to be recurrent. Gates and Westcott later proved several results about the problem of recurrence in shape for other parameters, by means of Foster criteria with quite simple Lyapunov functions. Theorem 2 in [4] states that for periodic conditions and $n \ge 2$, $H^n$ is transient if $\beta_2 < \beta_0$. Ergodicity is shown to hold under a condition relating $\beta$ and $n$, and a similar condition for ergodicity is also obtained for a process with a two-dimensional grid of sites. Of course, when $n$ is large such conditions are very restrictive.
The family $(\Delta_j X^n_t, t \ge 0)$ is said to be exponentially tight if $\mathbb{P}_0(|\Delta_j X^n_t| \ge k) = O^{\exp}_t(k)$. We also say that the family $(H^n_t, t \ge 0)$ is exponentially tight if for $j = 1, \dots, n-1$, $(\Delta_j X^n_t, t \ge 0)$ is exponentially tight. Obviously, exponential tightness of the process $(H^n_t, t \ge 0)$ implies that it is ergodic with an invariant distribution having exponential tails. From Theorem 1.2 in [3] we get:

Theorem 1. If $n \ge 2$ and $\beta_0 < \beta_1 \le \beta_2$, then $(H^n_t, t \ge 0)$ is exponentially tight, and hence ergodic. Moreover there exists $d_n < \beta_1$ such that the corresponding exponential bound holds.

We point out that this result is actually given for $\beta_0 < \beta_1 < \beta_2$, but the reader may verify that its proof works exactly the same if we take $\beta_1 = \beta_2$. We will take up several ideas of the approach in [3] in order to give weaker conditions for ergodicity in shape. Our notation will be consistent with this reference as far as possible. Before stating our results we begin by defining two useful notions: growth rate and monotonicity.
Proposition 1. Suppose that $H^n$ is ergodic and let $\pi_n$ be its invariant distribution. There exists $v_n > 0$ such that for $j = 1, \dots, n$, almost surely, $\lim_{t \to \infty} t^{-1} X^n_t(j) = v_n$. Moreover, for any $j = 1, \dots, n$, we have $v_n = \sum_{h \in \mathbb{Z}^{n-1}} \beta_{V_j(h)} \pi_n(h)$.
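For $n = 2$ this averaging formula can be checked numerically. The snippet below assumes, as the two-site discussion indicates, that the shape $H^2$ is a random walk with reversible measure $\pi(h) \propto (\beta_0/\beta_1)^{|h|}$ and that site $1$ deposits at rate $\beta_0$ when $h \ge 0$ and $\beta_1$ when $h < 0$; under these assumptions the sum reproduces the closed form $v_2 = 2\beta_0\beta_1/(\beta_0+\beta_1)$, the analogue of the formula for $v^{2,\infty}$ given later.

```python
def growth_rate_n2(b0, b1, cutoff=2000):
    """Evaluate v_2 = sum_h beta_{V_1(h)} pi(h) for the two-site shape process,
    truncating the geometric reversible measure pi(h) ~ (b0/b1)^|h| at |h| <= cutoff."""
    r = b0 / b1
    weights = {h: r ** abs(h) for h in range(-cutoff, cutoff + 1)}
    Z = sum(weights.values())
    return sum((b0 if h >= 0 else b1) * w for h, w in weights.items()) / Z

b0, b1 = 1.0, 3.0
v2 = growth_rate_n2(b0, b1)
assert abs(v2 - 2 * b0 * b1 / (b0 + b1)) < 1e-9
```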
For $n = 2$, the simplicity of the dynamics allows us to compute the exact value of the growth rate. Indeed $(H^2_t, t \ge 0)$ is a random walk on $\mathbb{Z}$, whose jump rates are given by $q(h, h+1) = \beta_0$ and $q(h, h-1) = \beta_1$ for $h > 0$, $q(h, h+1) = \beta_1$ and $q(h, h-1) = \beta_0$ for $h < 0$, and $q(0, \pm 1) = \beta_0$. Thus the first assertion is straightforward. If $\beta_1 > \beta_0$, it is easy to check that the probability measure proportional to $(\beta_0/\beta_1)^{|h|}$ is a reversible measure for this random walk, so it is the invariant distribution of the process. We can then compute $v_2 = 2\beta_0\beta_1/(\beta_0 + \beta_1)$.

We define the canonical partial order $\le$ in the obvious way: for two configurations $x, y \in \mathbb{N}^n$, we write $x \le y$ if $x(i) \le y(i)$ for $i = 1, \dots, n$. The process $X^n$ is said to be attractive if for any $x \le y$, there exists a coupling of two processes $(X^n_t, t \ge 0)$ and $(Y^n_t, t \ge 0)$, with distributions $\mathbb{P}_x$ and $\mathbb{P}_y$, such that almost surely, $\forall t \ge 0$, $X^n_t \le Y^n_t$.

Lemma 1. Let $n \ge 2$. If $\beta_0 \le \beta_1 \le \beta_2$, then $X^n$ is attractive.
Proof. Let $x \le y$. We consider $X^n$ and $Y^n$ obtained with the above Poisson construction, with $X^n_0 = x$, $Y^n_0 = y$, both using the same Poisson processes. Suppose that $X^n_{t-}(j) = Y^n_{t-}(j)$ and $N_{k,j}$ jumps at time $t$. We then have $V_j(X^n_{t-}) \le V_j(Y^n_{t-})$. Consequently, if $X^n_\cdot(j)$ jumps at time $t$, then so does $Y^n_\cdot(j)$, and hence the inequality $X^n(j) \le Y^n(j)$ is preserved.
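This coupling argument can be illustrated with a discrete-event sketch. Instead of the nested clocks $N_{k,j}$, the version below uses an equivalent thinning device: each site carries events at the maximal intensity $b_2$, and a uniform mark, shared by the two processes, decides whether the event is an actual deposition in each of them. It relies on the same reconstruction of $V_j$ as before (number of strictly higher neighbours, zero condition); the preservation of the order along the whole trajectory is then deterministic, exactly as in the proof.

```python
import random

def V(x, j):
    """Neighbours of site j strictly higher than x[j], zero boundary condition."""
    lo = x[j - 1] if j > 0 else 0
    hi = x[j + 1] if j < len(x) - 1 else 0
    return (lo > x[j]) + (hi > x[j])

def coupled_run(x0, y0, beta, n_events, seed=0):
    """Monotone coupling by thinning, for nondecreasing beta and x0 <= y0:
    an event at site j with mark u deposits a particle in a process exactly
    when u < beta[V_j(state)].  If x(j) = y(j) then V_j(x) <= V_j(y), so a
    jump of X(j) forces a jump of Y(j) and the order x <= y is preserved."""
    rng = random.Random(seed)
    x, y = list(x0), list(y0)
    b2 = max(beta)
    for _ in range(n_events):
        j = rng.randrange(len(x))      # the superposed clocks pick a uniform site
        u = rng.random() * b2          # shared mark
        if u < beta[V(x, j)]:
            x[j] += 1
        if u < beta[V(y, j)]:
            y[j] += 1
        assert all(a <= b for a, b in zip(x, y))
    return x, y

x, y = coupled_run([0, 0, 0], [1, 2, 1], (1.0, 2.0, 3.0), 500, seed=42)
```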
We are now interested in comparing two processes with the same initial state, but different numbers of sites, or different parameters. In general it is not true that increasing one of the parameters increases the process itself. However, we have a weaker result which is sufficient for our purpose.

Lemma 2. Let $n \ge 2$ and $\beta \in \,]0, +\infty[^3$. If $\beta_0 \le \beta_1 \le \beta_2$ and $n \le m$, then there exists a coupling of two processes $X^n$ and $X^m$ distributed as $\mathbb{P}^{\beta,n}_x$ and $\mathbb{P}^{\beta,m}_x$, such that, almost surely, $X^n_t(j) \le X^m_t(j)$ for all $t \ge 0$ and $j = 1, \dots, n$. If $\beta$ and $\beta'$ are such that $\beta_k \le \beta'_k$ for $k \le 2$, then there exists a coupling of two processes $X^n$ and $X'^n$ distributed as $\mathbb{P}^{\beta,n}_x$ and $\mathbb{P}^{\beta',n}_x$, such that, almost surely, $X^n_t \le X'^n_t$ for all $t \ge 0$.

Proof. Here again we can use Poisson constructions in such a way that the obtained processes enjoy the desired properties. The details are left to the reader.

Results
As already noticed, $X^n$ is transient in shape (for periodic conditions) when $\beta_2 < \beta_0$. This is not a surprise, since this inequality says that peaks grow faster than holes.
Our first theorem describes more precisely the asymptotic behaviour of the process $X^n$, with zero condition, under this assumption. It says that almost surely the shape ultimately adopts a comb shape. The exact form of the comb depends on the position of $\beta_1$ relative to $\beta_2$ and $\beta_0$, so we actually establish three analogous results. To illustrate this, we show three simulations of realizations of $t^{-1} X^n_t$ for $t = 1000$ and three different parameters.

Before stating the result we need to introduce some further notation. We write $v^{2,\infty}$ for the analogue of $v_2$, $H^2$ being replaced by $H^{2,\infty}$, $\beta_0$ by $\beta_1$ and $\beta_1$ by $\beta_2$. Thus, whenever $\beta_2 > \beta_1$, the process $H^{2,\infty}$ is ergodic with growth rate $v^{2,\infty} = 2\beta_1\beta_2/(\beta_1+\beta_2)$. Let $E_3$ be the set of all $n$-tuples of a prescribed form, where $k \in \mathbb{N}$. In Section 3 we prove Theorem 2.

Remark. It is plausible that the almost sure convergence of $t^{-1} X^n_t$ holds even without the assumption $\beta_2 < \beta_0$. For instance, when $\beta_0 < \beta_2 < \beta_1$, our belief, confirmed by computer simulations, is that it is always the case that the $n$ sites ultimately divide into a certain number of blocks (possibly one, in the ergodic case) of various widths separated by holes of unit length, each of these blocks being ergodic in shape. If this is true, then each site admits an asymptotic speed which is either $v_k$, $k$ being the width of the block containing the site, or $\beta_2$ if the site is ultimately a hole. Unfortunately we have not been able to prove this.

The next results concern the process with parameters lying in the domain $D$. We point out that the three degrees of freedom actually reduce to two: we can indeed assume $\beta_0 = 1$, because otherwise we can work with the process $(X^n_{t/\beta_0}, t \ge 0)$. Our first result in that direction is an abstract condition for ergodicity. The value of $\beta_0$ being fixed from now on, our strategy is to give for each $\beta_1 > \beta_0$ a threshold value of $\beta_2$ above which ergodicity holds.
The main idea is to compare $X^n$ with an auxiliary process $\widetilde{X}^n$, which is defined as a crystal process with modified parameters. Anything relative to the process $\widetilde{X}^n$ will be denoted with the symbol $\sim$. For $\beta_1 > \beta_0$ and $n \ge 2$ we define the quantity $\tilde{d}_n(\beta_1)$. Clearly $\tilde{d}_n(\beta_1) \in [\beta_0, \beta_1]$. Moreover, it follows from Lemma 2 that $\tilde{d}_n(\beta_1)$ is an increasing function of both $n$ and $\beta_1$.
In Section 4 we shall prove Theorem 3, and Theorem 4 below, which is an application of Theorem 3.
Theorem 4. Let $n \ge 2$ and $\beta \in D$. The process $H^k$ is ergodic for $2 \le k \le n+2$ if $\beta$ satisfies one of the conditions (a) or (b). Moreover, $H^k$ is ergodic for any $k \ge 2$ if $\beta$ satisfies condition (c).

Finally, in Section 5 we establish that $H^n$ is transient for some parameters in $D$; more precisely, we show Theorem 5.

Before turning to the proofs we briefly comment on the interest of the above assertions, with the following diagram in mind. In Theorem 4, condition (a) improves the only sufficient condition for ergodicity in $D$ established so far, namely (2). Condition (b) provides, for fixed $n$, a right-side neighbourhood of the set $\{\beta : \beta_0 < \beta_1 \le \beta_2\}$ in which ergodicity still holds. Condition (c) is certainly the most important one, since it yields a zone of ergodicity that does not depend on the number of sites, and Theorem 5 does the same for transience.

Proof of Theorem 2
The proof of Theorem 2 uses the following technical result.
Lemma 3. We assume that there exist $p > 0$, $N \in \mathbb{N}$, and subsets $B_1, \dots, B_N$ and $C_1, \dots, C_N$ satisfying the conditions (a) and (b) used below. Then for any $x \in A^c$, we have $\mathbb{P}_x(T_A < +\infty) = 1$.
Proof. We consider the partition $(B'_1, \dots, B'_N)$ of the set $B$ given by $B'_1 = B_1$ and $B'_i = B_i \setminus (B_1 \cup \dots \cup B_{i-1})$ for $i \ge 2$. We start by defining inductively an increasing sequence of stopping times. For the well-definedness of this sequence we add an element $\partial$ to the set $E$ and use the conventions $\inf \emptyset = \infty$ and $Z_\infty = \partial$. Let $x \notin A$. We define $\tau_1$ and $\tau'_1$, and let $i_1$ be such that $Z_{\tau_1} \in B'_{i_1}$ if $\tau_1 < \infty$ (with an arbitrary convention otherwise).

For $n \ge 2$, we define $\tau_n$ and $\tau'_n$ similarly, and let $i_n$ be such that $Z_{\tau_n} \in B'_{i_n}$ if $\tau_n < \infty$ (with an arbitrary convention otherwise).

In this construction the sequence $(Z_{\tau_1}, Z_{\tau'_1}, \dots, Z_{\tau_n}, Z_{\tau'_n}, \dots)$ is such that $Z_{\tau_1} \notin A$, $Z_{\tau'_1} \notin A$, $Z_{\tau_2} \notin A$, and so on, until one of its terms belongs to $A$, after which all the following terms are equal to $\partial$. Proceeding by induction, the strong Markov property and assumptions (a) and (b) easily yield a suitable inequality. Letting $n$ go to infinity in this inequality, we get $\mathbb{P}_x(\forall t \ge 0,\ Z_t \notin A) = 0$.
Proof of Theorem 2. For any configuration $h$ and $i < j$, we denote by $\Delta_{i,j}(h) = h(i) + \dots + h(j-1)$ the height difference between sites $i$ and $j$. We consider the following subsets of $\mathbb{Z}^{n-1}$ (to simplify notation, inside braces we denote by $x$ any configuration whose shape is $h$): $C_i$ is the set of configurations in which the lower site of the block $\{i, i+1\}$ is at the same level as the higher site among the sites neighbouring this block (there are two such sites unless $i = 1$ or $i = n-1$). A moment's thought will convince the reader of the following fact: in any configuration $h \in (A \cup B)^c$, there must be a unique site with maximal height. This will be used several times in this section.

Before turning to the proof, we introduce further notation. If $1 \le a \le b \le n$ and $t_0 \ge 0$, we denote by $X^{a:b, x_0, t_0}$ the crystal process with $b - a + 1$ sites starting from $x_0$ and defined in the same way as in (1), but using the Poisson processes $N_{k,j}(t_0 + \cdot)$, $a \le j \le b$, $0 \le k \le 2$. When $t_0 = 0$ this superscript will be dropped. Moreover, for any vector $x \in \mathbb{R}^n$, we let $x(a{:}b) := (x(a), x(a+1), \dots, x(b))$.
We begin with the proof of (i), proceeding by induction on $n$. The case $n = 1$ is straightforward, and for $n = 2$ the result is a consequence of Proposition 2. We now take $n \ge 3$ and assume that (i) holds for any $k < n$. For $Y \subset \mathbb{Z}^{n-1}$ we define $T_Y = \inf\{t \ge 0 : H^n_t \in Y\}$, together with some further notation, in particular the events $E_{A_i}(t)$.

Let $i \in \{2, \dots, n-1\}$. On the event $E_{A_i}(t)$, after time $t$ the value of $X^n_t(i)$ is increased by one unit at the jump times of $N_{2,i}$, and only at these times. Indeed, for $s \le t$, if both $H^n_s(i-1) > 0$ and $H^n_s(i) < 0$ then $V_i(H^n_s) = 2$, and if one of them is $0$ then any jump of site $i$ is forbidden by the event $E_{A_i}(t)$. We now use the fact that as long as $H^n_t \in A_i$, the vectors $X^n_t(1{:}i-1)$ and $X^n_t(i+1{:}n)$ evolve like two independent crystal processes with $i-1$ (resp. $n-i$) sites. Namely, on the event,
- $X^n_{t_0+t}(1{:}i-1) = X^{1:i-1, x^\ell_0, t_0}_t$, where $x^\ell_0 = x_0(1{:}i-1)$, and
- $X^n_{t_0+t}(i+1{:}n) = X^{i+1:n, x^r_0, t_0}_t$, where $x^r_0 = x_0(i+1{:}n)$.

Thus the inductive hypothesis ensures that, $\mathbb{P}_x$-a.s., $t^{-1} X^n_t(1{:}i-1)$ and $t^{-1} X^n_t(i+1{:}n)$ both converge and their limits have the form (5).
Thanks to (11) and (12), it is sufficient to show (13) to achieve the proof. We first prove the existence of a constant $r > 0$ such that (14) holds. On the one hand, starting from $h \in A_i$, two transitions suffice to make site $i$ strictly lower than its two neighbours, so there exists $r_1 > 0$ such that (15) holds for any $h \in A_i$. On the other hand, if $h'$ is such that $h'(i-1) > 0$ and $h'(i) < 0$, then, since $\beta_2 < \beta_0$, basic considerations about Poisson processes give the existence of $r_2 > 0$ such that (16) holds for any such $h'$. Hence (14) with $r = r_1 r_2$ follows from (15) and (16). Finally, (13) will follow from (14), the strong Markov property, and a corresponding estimate valid for any $h \notin A$. By the symmetry of the process, the case $i = 1$ may be treated as the case $i = n$.

We now turn to (b), so we take an initial condition $h \in B_i \setminus A$. It is easy to see that there is some $i \in \{2, \dots, n-1\}$ for which the corresponding strict inequalities hold. Note that on the event $\{T_{C_i \cup A} > t\}$ all these strict inequalities have to be preserved up to time $t$, hence $\{T_{C_i \cup A} > t\} \subset \{\forall s \in [0,t],\ V_{i-1}(H^n_s) = 1\}$. Consequently we have a corresponding inclusion for any configuration $x$ whose shape is $h$; a similar argument gives a second one, and combining the two last inclusions yields (19). If $\beta_1 > \beta_0$, the probability in (19) is equal to $0$: this follows from an almost sure comparison whose last inequality is an easy consequence of the definition of $v_2$. If $\beta_1 = \beta_0$, this probability is still null, since $h(i-1) + X$ is then a symmetric random walk on $\mathbb{Z}$. Now by symmetry the case $i = 1$ may be treated like the case $i = n-1$. Finally, the distribution of the process $(X^2_t, t \ge 0)$ being exchangeable, we obtain in particular $\mathbb{P}_h(H^n_{T_{C_i \cup A}} \in A) \ge 1/2$, and this concludes the proof of (i).

The proofs of (ii) and (iii) are based on the same ideas. Let us continue with (ii). It is straightforward for $n = 1$. For $n = 2$ it is easy: the process $H^2_t \in \mathbb{Z}$ is a nearest-neighbour random walk, namely $|H^2_t|$ is increased by one unit at rate $\beta_0$ and decreased by one unit at rate $\beta_1$ (except of course at $0$). We then have $\mathbb{P}_h(F_N \cup F_{-N}) = 1$, and this allows us to conclude. As we did for (i), we shall use Lemma 3 to show that $\mathbb{P}_h(T_A < +\infty) = 1$ for $h \notin A$, and conclude by induction. This time, however, this is true only for $n \ge 4$, so we first have to treat the case $n = 3$ separately. For $n = 3$ we let $K_1 = \{h \in \mathbb{Z}^2 : h(1) \ge 0,\ h(2) \le 0\}$ and $K_2 = \{h \in \mathbb{Z}^2 : h(1) \le 0,\ h(2) \ge 0\}$.
As in the proof of (i), there exists $r > 0$ such that $\mathbb{P}_h(E_{K_i}(0)) \ge r$ for any $h \in K_i$, and once we know that $H^3_t$ stays forever in one of these two sets, we are done. Putting $K = K_1 \cup K_2$, it is again sufficient, thanks to the Markov property, to show that for $h \in K^c$ we have $\mathbb{P}_h(T_K < +\infty) = 1$. Take for example $h(1), h(2) > 0$. On the event $\{T_K > t\}$ we have $H^3_t(1) = h(1) + N_{1,1}(t) - N_{1,2}(t)$, $\mathbb{P}_h$-a.s., so $\mathbb{P}_h(T_K = +\infty) \le \mathbb{P}(\forall t \ge 0,\ h(1) + N_{1,1}(t) - N_{1,2}(t) > 0) = 0$, because of the recurrence of the symmetric random walk on $\mathbb{Z}$.

We now fix $n \ge 4$ and check (a) in Lemma 3. Let $h \notin (A \cup B)$ and let $i$ be the unique site with maximal height in the configuration $h$. We may suppose that $i \ge 3$ without loss of generality, thanks to the symmetry. On the event $\{T_B > t\}$, the relevant height difference evolves, $\mathbb{P}_h$-a.s., as a symmetric random walk, and again we get $\mathbb{P}_h(T_B = +\infty) = 0$ by recurrence.

Now we check (b) in Lemma 3. Let $h \in B_i \setminus A$. We suppose that $2 \le i \le n-1$, since the case $i = 1$ is the same as $i = n-1$ thanks to the symmetry. On the event $\{T_{C_i \cup A} > t\}$ we have, $\mathbb{P}_h$-a.s., the analogous identities. We define suitable events and, using the recurrence of the symmetric random walk, we get (20). From the above remark on the crystal process with 2 sites, we obtain (21), and consequently (20) and (21) imply that $\mathbb{P}_h(T_{C_i \cup A} = +\infty) = 0$. The fact that $\mathbb{P}_h(H^n_{T_{C_i \cup A}} \in A) \ge 1/2$ follows from a symmetry argument, as in the proof of (i).

Finally, the proof of (iii) is analogous to that of (i). We shall show that, with probability $1$, some sites become, and remain forever, higher than their neighbours. When this happens the configuration is broken into two disjoint parts, but this time infinite boundaries can be created and have to be taken into account. For this reason it is necessary to study the three types of boundary conditions ($0$, $1$ or $2$ infinite boundaries) to make the induction work. Thus our inductive hypothesis contains three statements. Let $(H_n)$ be: for any $x \in \mathbb{N}^n$, $t^{-1} X^n_t$ converges $\mathbb{P}_x$-a.s. (resp. $\mathbb{P}^\infty_x$-a.s.
and $\mathbb{P}^{0,\infty}_x$-a.s.) to some random variable $G$ (resp. $G^\infty$ and $G^{0,\infty}$), which takes the form
$$(b_\ell, \beta_0, a_1, \beta_0, a_2, \dots, a_{k-1}, \beta_0, a_k, \beta_0, b_r), \tag{22}$$
where $b_\ell$ and $b_r$ are given by:
- for $G$, $b_\ell, b_r = (\beta_1)$ or $()$;
- for $G^{0,\infty}$, $b_\ell = (\beta_1)$ or $()$, and $b_r = (\beta_2)$ or $(v^{2,\infty}, v^{2,\infty})$.
It is tedious but easy to check that vectors of the form (22) concatenate together into a vector of $E_3$. Since $(H_1)$ and $(H_2)$ are straightforward, the problem is again reduced to showing that the separation into two blocks occurs almost surely for $n \ge 3$.

For a configuration $h$ whose boundary values have the prescribed signs (the signs of $h(0)$ and $h(n)$ are stressed by the boundaries), we have to show three equalities. But $\beta_0$ now is the smallest parameter, so we easily get the analogue of (14), and it remains to prove these equalities. The first equality is straightforward, since any configuration belongs to the set $D$. To prove the second and third equalities we can follow exactly the proof of (i), except that $A_i$ is replaced by $D_i$, $\beta_2$ and $\beta_0$ invert their roles, $v_2$ is replaced by $v^{2,\infty}$, and the sets $C_i$ are defined with the opposite inequalities.

Proof of Theorems 3 and 4
We recall that in this section we always assume that $\beta$ lies in the domain $D$. We first need to introduce some further notation:
- $\Delta^{s,t}_j X^n := X^n_t(j) - X^n_s(j)$;
- $\tau^j_t := \sup\{s \le t : \Delta_j X^n_s = 0\}$ (this is not a stopping time);
- $\mathcal{P}_\lambda$ will stand for a random variable with Poisson($\lambda$) distribution.
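The estimates below repeatedly use exponential bounds on Poisson tails, such as $\mathbb{P}(\mathcal{P}_{k/2} \ge k) = O^{\exp}(k)$. This is an instance of the standard Chernoff bound $\mathbb{P}(\mathcal{P}_\lambda \ge k) \le e^{-\lambda}(e\lambda/k)^k$ for $k > \lambda$, which for $\lambda = k/2$ gives the explicit rate $(\sqrt{e}/2)^k \approx 0.825^k$; a quick numerical check:

```python
import math

def poisson_tail(lam, k):
    """P(P_lambda >= k), via the complement of the cumulative pmf."""
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))

rho = math.sqrt(math.e) / 2            # Chernoff rate for lambda = k/2
for k in range(1, 60):
    assert poisson_tail(k / 2, k) <= rho ** k + 1e-12
```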
Remark. Since $\Delta_j X^n_t$ has the same distribution as $-\Delta_{n-j} X^n_t$, showing the exponential tightness of $(H^n_t, t \ge 0)$ amounts to checking that for any $j$, $\mathbb{P}_0(\Delta_j X^n_t \ge k) = O^{\exp}_t(k)$, that is, exponential tightness for $((\Delta_j X^n_t)^+, t \ge 0)$.

Lemma 4. We assume that, for some $C_j, \alpha_j > 0$, a suitable exponential bound holds. Then $((\Delta_j X^n_t)^+, t \ge 0)$ is exponentially tight.

Proof. Let $t \ge 0$, and put $m = \min\{q \in \mathbb{N} : t - q \le \frac{k}{2\beta_1}\}$. We have $k/(4\beta_1) \le t - m \le k/(2\beta_1)$ as soon as $k \ge 4\beta_1$ and $t \ge \frac{k}{2\beta_1}$. But we may suppose these two restrictions fulfilled: the first one because the conclusion does not depend on the values of $\mathbb{P}_0(\Delta_j X^n_t \ge k)$ for any finite number of $k$, and the second one because, if it does not hold, the conclusion easily follows from $\mathbb{P}_0(\Delta_j X^n_t \ge k) \le \mathbb{P}(\mathcal{P}_{\beta_1 t} \ge k) \le \mathbb{P}(\mathcal{P}_{k/2} \ge k) = O^{\exp}(k)$. We then decompose $\mathbb{P}_0(\Delta_j X^n_t \ge k)$ into two terms, each of which is shown to be $O^{\exp}(k)$.

Lemma 5. Let $\beta_0 < \beta_2 < \beta_1$ and $1 \le j \le n-1$. We suppose that, for $i = 1, \dots, j-1$, $((\Delta_i X^n_t)^+, t \ge 0)$ is exponentially tight, and that there exists $d_j < \beta_2$ such that (24) holds, where $\widetilde{X}^j$ is defined by (8). Then $((\Delta_j X^n_t)^+, t \ge 0)$ is exponentially tight. For $j = n-1$, it is not necessary to assume (24).
Proof. We first take $j \le n-2$, and choose a constant $L > 0$ such that (25) holds. The conclusion will follow from (23), which we now prove. The bound for $\mathbb{P}_0(A_1)$ holds by assumption, and the bound for $\mathbb{P}_0(A_2)$ holds by the choice of $L$, so it now remains to show that the remaining probability is exponentially small for some $C_j, \alpha_j > 0$. Denoting by $A$ this last event, we use the decomposition (26). The first term in the sum (26) is $O^{\exp}(t-s)$ by (25). Putting $E_j = \{x \in \mathbb{N}^j : \max_{i=1,\dots,j-1} \Delta_i x \le \frac{t-s}{L}\}$ and using the Markov property, the second term in (26) is bounded in three steps: the first inequality follows from Lemma 2, the second one from Lemma 1 and the fact that $\max(x) - x(j) \le (j-1)/L$ for $x \in E_j$, and the final equality is assumption (24). This concludes the proof for $j \le n-2$.

We now treat the case $j = n-1$. Applying Theorem 1 to $\widetilde{X}^n$, we get (24) for some constant $d_{n-1} < \beta_1$. This time we take $L$ such that $d_{n-1} + \frac{n-1}{L} < \beta_1$. We still have (26). Note that on the event $\{\Delta_{n-1} X^n_t > 0\}$ we must have $V_n(X^n_t) = 1$. Hence in the sum (26) we proceed as for $j < n-1$ for the second term, and the first term is less than $\mathbb{P}\big(\mathcal{P}_{\beta_1(t-s)} \le (d_{n-1} + \frac{n-1}{L})(t-s)\big)$.

Proof of Theorem 3. With (9) in mind, our hypothesis implies that $\beta_2 > \tilde{d}_k(\beta_1)$ for $k \le n$. Hence we only have to prove the desired result with $k = n+2$. Let us take $r \in \,]\tilde{d}_n(\beta_1), \beta_2[$. By Lemma 2, we have the corresponding coupling inequality. We show by induction on $j$ that $(H_j)$ holds for $j = 1, \dots, n+1$, which by the remark preceding Lemma 4 is a sufficient condition for $H^{n+2}$ to be ergodic. For $j = 1$ we simply apply Lemma 5, whose assumptions are clearly satisfied since $\beta_2 > \beta_0$ and $\widetilde{X}^1_t$ is a simple Poisson process with intensity $\beta_0$. For $j \le n$, the fact that $(H_i)$, $i = 1, \dots, j-1$, imply $(H_j)$ is a direct consequence of Lemma 5. For $j = n+1$ it is still the case, using the last assertion of Lemma 5.
Then the probability $\mathbb{P}_0\big(X^n_t(n) \ge (g_n + \varepsilon)t;\ \min \cdots\big)$ can be bounded as required, because in any configuration $x$ we have $V_j(x) = 0$ for at least one site $j$, hence $\Sigma X^n_t$ is dominated by a Poisson process with intensity $n g_n$. The fact that the complementary estimate also holds is a direct consequence of Theorem 1.
We finally turn to the proof of (29), and let $t_0 := 1/(4\sqrt{2}\sqrt{\beta_1 \beta_0})$. We shall show that (30) holds for $k \ge 1$, $j \ge 0$ and $1 \le i \le n$. This implies the desired result: if (30) holds, then for $\varepsilon > 0$ the required bound follows; here $\lfloor t/t_0 \rfloor$ stands for the integer part of $t/t_0$. To prove (30) we proceed by induction, showing that $(H_\ell)$ holds for any $\ell \ge 1$. In this proof we may and will suppose that (31) holds, since otherwise we easily get $\tilde{d}_n(\beta_1) \le \beta_1 \le 2\beta_0 \le 4\sqrt{2}\sqrt{\beta_1 \beta_0}$. For readability we define $\tau^{v,d} := \inf\{s \ge 0 : \widetilde{X}^n_s(v) = d\}$. A site $i$ is said to be a seed at level $\ell$ if the $\ell$-th square to be deposited at site $i$ is added at a moment when site $i$ is at least as high as its neighbours. For $i_1 \le i_2$ we say that $i_1$ extends to $i_2$ during the time interval $[s,t]$ if $N_{1,i_1+1}, N_{1,i_1+2}, \dots, N_{1,i_2}$ jump successively between times $s$ and $t$. For $i_1 > i_2$ this definition is extended in the obvious way. Inequality (30) for $j = 0$, and hence $(H_1)$, are straightforward. We now suppose that $(H_\ell)$ holds and take $i \in \{1, \dots, n\}$ and $k, j \ge 1$ with $k + j = \ell + 1$. Then, for the last event to be realized, the three following conditions have to be satisfied:
- $\tau^{u,k+j-2} \le m t_0$;
- $N_{0,u}$ jumps at least once in the time interval $[\max(\tau^{u,k+j-2}, (m-1)t_0), m t_0]$;
- $u$ extends to $i$ after the first of these jumps.
Using the fact that the Poisson distribution satisfies $\mathbb{P}(\mathcal{P}_\lambda \ge 1) \le \lambda$, it follows that the probability of this event is suitably controlled, and the inductive hypothesis then completes the estimate; the last inequality is a consequence of (31) and the choice of $t_0$.
Proof. Here we denote by $A$ the set of configurations with a hole. For $x \in \mathbb{N}^3$ we also say that $x \in A$ if the shape of $x$ belongs to $A$. We define a double sequence of stopping times by letting $T_0 = 0$, $U_1 = \inf\{t \ge 0 : X^3_t \in A\}$, $T_1 = \inf\{t > U_1 : X^3_t \notin A\}$, and $U_{k+1} = \inf\{t \ge T_k : X^3_t \in A\}$, $T_{k+1} = \inf\{t > U_{k+1} : X^3_t \notin A\}$ for $k \ge 1$.
We also define $Y_k := \Sigma X^3_{T_k}$. The desired result will follow if we show that $\lim_{k \to \infty} Y_k / T_k \ge 9\beta_0 - 3\varepsilon$. We first claim that the sequence $I_k := Y_k - (9\beta_0 - 3\varepsilon) T_k$ is a submartingale. By the strong Markov property, this is the case if (33) holds for any $x \notin A$. To establish this inequality we make the following observations:
- Let $Z_1 = \Sigma X^3_{U_1}$ be the number of jumps before hitting $A$. Starting from any $y \notin A$, the probability that the first jump leads to $A$ is less than $\beta_0/\beta_1$. Hence by the Markov property, $Z_1$ is stochastically larger than the geometric distribution with parameter $\beta_0/\beta_1$, and consequently (34) holds.
- Any $y \notin A$ with $\Sigma y \notin 3\mathbb{Z}$ has at least one site $j$ such that $V_j(y) = 1$. Thus, conditionally on $Z_1$, at least $2Z_1/3 - 1$ transitions until time $U_1$ occur with a rate larger than $\beta_1$, and the others occur with a rate at least $3\beta_0$. We deduce from this remark that $E_x[U_1 \mid Z_1] \le \frac{2Z_1}{3\beta_1} + \frac{Z_1/3 + 1}{3\beta_0}$, and hence (35) holds.
- The configuration $X^3_{U_1}$ belongs to the set $A \cap \{x \in \mathbb{N}^3 : x(1) = x(2) + 1 \text{ or } x(3) = x(2) + 1\}$. But clearly, for any $y$ in that set, the exit time from $A$ starting from $y$ is stochastically smaller than the hitting time of $0$ for a birth and death process on $\mathbb{Z}_+$ starting from $1$, with birth rate $\beta_0$ and death rate $\beta_2$. Hence (36) holds.
Now (35) and (36) yield a further estimate. Condition (32) implies that $(2\varepsilon - 6\beta_0)/\beta_1 + \varepsilon/(3\beta_0) \ge 0$. Using (34) we then obtain (37).

The process $m_t = \min\big(X^{1:2}_t(2), X^{4:5}_t(1), \dots, X^{n-4:n-3}_t(2), X^{n-1:n}_t(1)\big)$ satisfies $\lim_{t \to \infty} m_t/t = v_2$, and is independent of the Poisson processes $N_{3,2}, N_{6,2}, \dots, N_{n-2,2}$. Hence the result follows from (37) and the fact that $\beta_2 < v_2$.