Fluctuations of a Surface Submitted to a Random Average Process

We consider a hypersurface of dimension $d$ embedded in a $(d+1)$-dimensional space. For each $x\in Z^d$, let $\eta_t(x)\in R$ be the height of the surface at site $x$ at time $t$. At rate $1$ the $x$-th height is updated to a random convex combination of the heights of the `neighbors' of $x$. The distribution of the convex combination is translation invariant and does not depend on the heights. This motion, named the random average process (RAP), is one of the linear processes introduced by Liggett (1985). Special cases of the RAP are a type of smoothing process (when the convex combination is deterministic) and the voter model (when the convex combination concentrates on one site chosen at random). We start with the heights located on a hyperplane passing through the origin, different from the trivial one $\eta(x)\equiv 0$. We show that, when the convex combination is neither deterministic nor concentrated on one site, the variance of the height at the origin at time $t$ is proportional to the number of returns to the origin of a symmetric random walk of dimension $d$. Under mild conditions on the distribution of the random convex combination, this gives a variance of order $t^{1/2}$ in dimension $d=1$, of order $\log t$ in dimension $d=2$, and bounded in $t$ in dimensions $d\ge 3$. We also show that for each initial hyperplane the process as seen from the height at the origin converges to an invariant measure on the hypersurfaces conserving the initial asymptotic slope. The height at the origin satisfies a central limit theorem. To obtain the results we use a corresponding probabilistic cellular automaton for which similar results are derived. This automaton corresponds to a product of (infinite dimensional) independent random matrices whose rows are independent.


Introduction
We consider a stochastic process $\eta_t$ in $R^{Z^d}$. To each site $i \in Z^d$ at each time $t$ corresponds a height $\eta_t(i)$. These heights evolve according to the following rule. For each $i \in Z^d$ let $u(i,\cdot)$ be a random probability distribution on $Z^d$. Each height has an independent Poisson clock of parameter $1$. When the clock rings for site $i$ at time $t$, a realization of $u$ is chosen, independently of everything else, and the height at site $i$ moves to the position $\sum_{j\in Z^d} u(i,j)\,\eta_t(j)$.
In other words, at rate one each height is replaced by a random convex combination of the current heights, whose weights are chosen independently at each update. We call this process the random average process (RAP). The motion is well defined under suitable conditions on the distributions of $u$ and $\eta_0$. The formal generator is given by (1.1), where $\nu$ is the distribution of the matrix $u$ and $u_i$ is defined as the operator in (1.2); that is, $u_i$ replaces the height at $i$ by a convex combination of the heights at $Z^d$.
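As an illustration of the dynamics (not part of the original construction), a finite toy version of the RAP can be simulated; the one-dimensional torus, the nearest-neighbor uniform weights, and all names below are our own choices, not the paper's.

```python
import random

def rap_step(heights, rng):
    """One Poisson event: pick a site uniformly at random and replace its
    height by a random convex combination of its two neighbors (torus).
    Weights u(i, i+1) = a, u(i, i-1) = 1 - a with a uniform in [0, 1] are
    one admissible realization of the random matrix u."""
    n = len(heights)
    i = rng.randrange(n)
    a = rng.random()
    heights[i] = a * heights[(i + 1) % n] + (1 - a) * heights[(i - 1) % n]

rng = random.Random(0)
eta = [float(i) for i in range(10)]   # an "inclined" initial profile
lo, hi = min(eta), max(eta)
for _ in range(1000):
    rap_step(eta, rng)
# Convex combinations cannot leave the convex hull of the current heights,
# so the profile stays within its initial range.
assert lo <= min(eta) and max(eta) <= hi
```

The assertion records the basic stability property used throughout: each update keeps every height inside the convex hull of the current configuration.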
The RAP is a special case of the linear processes of chapter IX of Liggett (1985). Particular examples of the RAP are the noiseless smoothing process and the voter model.
In the noiseless smoothing process the distribution $\nu$ concentrates all its mass on a constant matrix; that is, there exists a matrix $a$ such that $\nu(u = a) = 1$. Liggett and Spitzer (1981), Andjel (1985) and Liggett (1985) studied the (noisy) smoothing process: at each event of the Poisson processes, the (deterministic) convex combination is multiplied by an independent positive random variable $W$ of mean $1$, the noise. The above papers studied questions about existence and ergodicity of the process when the heights are restricted to be nonnegative. It would be interesting to study the ergodicity questions for the case of general initial conditions. In the voter model $\nu$ concentrates mass on the set of matrices $u$ with the following property: for each $i$ there is exactly one $j$ with $u(i,j)$ different from zero (and hence equal to $1$). In other words, when the clock rings for height $i$, it is replaced by the height $j$ for which $u(i,j) = 1$. In the usual voter model the heights assume only two values. See Durrett (1996) for a recent review of the voter model.
In this paper we will concentrate on the other cases, but some marginal results will concern the smoothing process and the voter model. The latter model will be discussed briefly also in the final section of the paper.
For most of our results the initial configuration is a hyperplane passing through the origin: given a vector $\lambda \in R^d$, for each $i \in Z^d$ we set $\eta_0(i) = i\lambda^*$, the (matrix) product of $i$ and the transpose of $\lambda$ (i.e. the inner product). Notice that if $\lambda \equiv 0$, then $\eta_t \equiv 0$ because this initial configuration is invariant for the process. We assume that the distribution of $u(i,j)$ is translation invariant: $u(i,j)$ and $u(i+k, j+k)$ have the same distribution. Our main result shows that when the initial configuration is a hyperplane, the variance of the height at the origin at time $t$ is proportional to the expected time spent at the origin by a symmetric random walk perturbed at the origin. Denoting the variance by $V$, we show that if $\eta_0(i) = i\lambda^*$ for all $i$, then (1.4) holds, where $D_t$ is a random walk (symmetric, perturbed at the origin) with transition rates
$$q(0,k) = \sum_j \nu\bigl(u(0,j)\,u(0,j+k)\bigr), \qquad q(\ell, \ell+k) = \nu u(0,k) + \nu u(0,-k), \quad \ell \neq 0.$$
Our result implies that, starting with a hyperplane perpendicular to $\lambda$, the fluctuations of the height at the origin behave asymptotically as in (1.6), valid for $0 < \sigma^2 + \mu^2 < \infty$. Here the phrase "$f(t)$ is of the order of $g(t)$" means that we can find positive constants $A$ and $B$ such that $Ag(t) \le f(t) \le Bg(t)$ for all $t \ge 0$.
In Section 4 we show a central limit theorem for $\eta_t(0)$ when the initial configuration is a hyperplane.
We are able to obtain the asymptotic behavior of the variance for random initial configurations only in some particular one dimensional cases ($d = 1$). When the initial configuration is a hyperplane and $\sigma^2 > 0$, the asymptotic variance of the height at the origin is of the same order in both the biased ($\mu \neq 0$) and unbiased ($\mu = 0$) cases. If the initial configuration is distributed according to a measure with spatial fluctuations, one expects the biased and unbiased cases to have different asymptotic variances: the height at the origin in the biased case should pick up the spatial fluctuations of the initial configuration. This is the case in a one dimensional example, as we show in Section 6.
We study the asymptotic distribution of the process. We show that, starting with a hyperplane, the process as seen from the height at the origin, $\hat\eta_t$, defined by $\hat\eta_t(i) = \eta_t(i) - \eta_t(0)$, converges weakly to an invariant measure. Furthermore we show that, if $\hat\eta$ is distributed according to the limiting invariant measure, then the mean of the differences is conserved, i.e. $E[\hat\eta(i) - \hat\eta(j)] = E[\eta_0(i) - \eta_0(j)]$, and the variance of the differences has the asymptotic behavior (1.7), valid for $0 < \sigma^2 + \mu^2 < \infty$ (the use of the phrase "is of the order of" makes sense because, by translation invariance, the distribution of $\hat\eta(i) - \hat\eta(j)$ depends only on $i - j$). To prove this in $d = 2$ we need a stronger condition, the finiteness of a $(2+\delta)$-th moment: $\sum_j |j|^{2+\delta}\,\nu u(0,j) < \infty$ for some $\delta > 0$. Except for a few one dimensional cases, we are unable to further describe this measure. It is reasonable to conjecture that the limiting measure is the unique invariant measure for $\hat\eta_t$ with the same slope as the initial hyperplane.
Asymptotic results similar to (1.6) and (1.7) have been obtained by Hammersley (1967) for a system in Z d called harness. In this system, each coordinate takes the mean value of the heights of some fixed set of neighbors plus a zero mean random variable. Toom (1998) studies the tail behavior of harnesses.
In the one dimensional RAP, if $u(i, i+1) = 1 - u(i, i-1)$, then each height is replaced by a convex combination of the nearest neighboring heights. In this case the heights remain ordered if started so, and when projected on a vertical line they form a point process. We can think of a particle located at each event of this point process. In a previous unpublished work, the authors call the resulting process on the particle configuration the conservative nearest-particle system. If $u(i, i+1)$ is uniformly distributed in $[0,1]$, then the Poisson process of parameter $\rho$ is reversible for the particle system, for each $\rho > 0$. In Section 6 we describe the correspondence between the RAP and the particle system and correct a statement of Ferrari (1996) about the asymptotic variance of a tagged particle in the conservative nearest-particle system. Kipnis, Marchioro and Presutti (1982) studied the nearest-neighbor one dimensional process with $u(i, i+1) = 1 - u(i, i-1)$ uniformly distributed in $[0,1]$, $i \in Z$. They consider $L$ differences between successive RAP heights and impose different boundary conditions at the two ends of the interval $\{0, \ldots, L-1\}$. In the limit as $L \to \infty$ they show that the height difference profile in the normalized interval (rescaled so that its length is kept equal to $1$) is a linear interpolation of the boundary conditions. This implies that the hydrodynamic limit of the height difference process corresponds to the heat equation. We discuss the relationship between this limiting equation and the one obtained for the RAP. For $d \ge 1$ we show that if $\mu = 0$, then the hydrodynamic equation for the RAP is also the heat equation. If $\mu \neq 0$ then the hydrodynamic equation is a linear transport equation. See Section 8.
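The ordering property claimed above is easy to verify numerically. The following sketch (a finite interval with frozen boundary heights, uniform nearest-neighbor weights; all names are ours) checks that an initially ordered profile stays ordered, so the gaps indeed define a particle configuration.

```python
import random

def nn_rap_step(heights, rng):
    """Nearest-neighbor RAP update at a random interior site:
    u(i, i+1) = a, u(i, i-1) = 1 - a, with a uniform in [0, 1]."""
    i = rng.randrange(1, len(heights) - 1)
    a = rng.random()
    heights[i] = a * heights[i + 1] + (1 - a) * heights[i - 1]

rng = random.Random(1)
eta = sorted(rng.uniform(0.0, 100.0) for _ in range(50))
for _ in range(5000):
    nn_rap_step(eta, rng)
# The new height at i lies between the heights of i-1 and i+1, so the
# ordering is preserved and the gaps eta(i+1) - eta(i) stay nonnegative:
# they are the inter-particle distances of the conservative system.
assert all(eta[i] <= eta[i + 1] for i in range(len(eta) - 1))
```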
We perform a Harris graphical construction of the RAP: a family of independent rate-one Poisson processes indexed by the sites $i \in Z^d$ is considered. To each time event of each Poisson process a realization of $u$, independent of everything else, is attached. If at time $t$ a Poisson mark appears at site $i$ and $u$ is the corresponding realization of the weights, then at time $t$ the new configuration is $u_i \eta(t^-)$, where $u_i$ was defined in (1.2). We show that this construction works if $\sum_j |j|^2\, E\,u(0,j) < \infty$ and the initial configuration $\eta_0$ satisfies $|\eta_0(j)| \le C|j|^2$ for all $j$ and some constant $C$ independent of $j$.
Consider a random walk $\tilde Y_t \in Z^d$ with rates $\nu u(i,j)$ to jump from $i$ to $j$, $i, j \in Z^d$. To construct it we use the same Poisson marks and realizations of $u$ used in the construction of $\eta_t$. The motion is the following: if $\tilde Y_{t^-} = i$, a Poisson mark of site $i$ appears at time $t$ and $u$ is the realization of the weights corresponding to this mark, then $\tilde Y_t$ will be equal to $j$ with probability $u(i,j)$. (The probability space must be conveniently enlarged in order to perform this walk.) Call $F_t$ the sigma algebra generated by the Poisson marks and the realizations of the $u$'s on these marks between $0$ and $t$. When conditioned on $F_t$, $\tilde Y_t$ can be seen as a "random walk in a space-time random environment". Provided that the Harris construction is feasible, the following duality relation holds almost surely: $\eta_t(x) = E\bigl(\eta_0(Y^{x,t}_t) \mid F_t\bigr)$, where $Y^{x,t}_s$, $0 \le s \le t$, is the walk following the marks from $t$ to $0$ backwards in time, starting at site $x$. Notice that $\tilde Y_t$ and $Y^{0,t}_t$ have the same distribution. A key piece of our proofs is the fact that $E(\tilde Y_t \mid F_t)$ is a martingale.
The case where $u$ is constant, that is, when there exists a stochastic matrix $a$ such that $\nu(u = a) = 1$ (1.8), corresponds to a potlatch process, which is the dual of the smoothing process. When $u$ is not allowed to have more than one positive coordinate (equal to one, the others equal to zero), $(Y^{x,t}_s : 0 \le s \le t)$ conditioned on $F_t$ is just a realization of the deterministic walk starting at $x$ in the time interval $[0,t]$ with transition probabilities given by the distribution of $u$. Hence in this case the processes $(Y^{x,t}_s : 0 \le s \le t,\ x \in Z^d)$ perform coalescing random walks, dual of the voter model.
To show the above results we prefer to introduce a probabilistic cellular automaton (PCA). The time of the PCA is discrete and at each time every site chooses an independent random convex combination of its own height and the heights at the other sites and updates its height accordingly. If we denote by $X_n$ the vector of heights at time $n$ indexed by $Z^d$, and by $u_n(\cdot,\cdot)$ a random sequence of iid stochastic matrices, then $X^*_n = u_n u_{n-1} \cdots u_1 X^*_0$. The matrices $u_n$ also have the property that $\{u_n(i, i+\cdot) : i \in Z^d\}$ is a family of iid random vectors for $n \ge 1$. The results described above are shown for this PCA and then a standard approximation of the particle system by a family of PCA's is performed.
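The matrix-product representation can be sketched on a finite state space (a truncation we introduce for illustration only; the row distribution is an arbitrary admissible choice):

```python
import random

def random_stochastic_matrix(n, rng):
    """An n x n matrix with independent rows, each row a random probability
    vector (here: iid uniforms normalized to sum 1)."""
    rows = []
    for _ in range(n):
        w = [rng.random() for _ in range(n)]
        s = sum(w)
        rows.append([v / s for v in w])
    return rows

def apply(u, x):
    """One PCA step: (u x*)* as a matrix-vector product."""
    return [sum(u[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

rng = random.Random(2)
n_sites, n_steps = 8, 5
x = [float(i) for i in range(n_sites)]   # X_0, an inclined profile
mats = [random_stochastic_matrix(n_sites, rng) for _ in range(n_steps)]

# Iterating X_n = u_n X_{n-1} realizes X_n* = u_n u_{n-1} ... u_1 X_0*.
for u in mats:
    x = apply(u, x)

# Each row is a probability vector, so every height remains a convex
# combination of the initial heights.
assert all(abs(sum(row) - 1.0) < 1e-9 for u in mats for row in u)
assert min(x) >= 0.0 and max(x) <= float(n_sites - 1) + 1e-9
```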
In Section 2 we introduce the PCA and prove in Proposition 2.3 the discrete-time version of (1.4). In Section 3 we prove some estimates for the discrete version of the random walk $D_t$ which lead to the asymptotics (1.6). In Section 4 we show the central limit theorem for the discrete version of $\eta_t(0)$ starting with the hyperplane. In Section 5 we show that the process starting with the hyperplane converges to an invariant measure with the properties (1.7). In Section 6 we discuss the one dimensional biased case and show that, when the initial distribution of height differences is an independent positive increment process with fluctuations, these fluctuations appear in the variance of the height at the origin. In Section 7 we discuss the passage from discrete to continuous time and in Section 8 we discuss the hydrodynamic limit. In Section 9 we discuss briefly the voter model.

Discrete time: Probabilistic cellular automaton
The process we study in this section is a discrete time system, or probabilistic cellular automaton, whose configurations belong to $R^{Z^d}$. Under this evolution, at time $n$ each site chooses a random convex combination of its own height and the heights of the other sites at time $n-1$ and jumps to this new height. Let $(X_n)_{n \ge 0}$ denote the height system: $X_n(i)$ is the height of site $i$ at time $n$. The formal definition goes as follows. Let $\{u_n(i, i+\cdot),\ n \ge 1,\ i \in Z^d\}$ be an i.i.d. family of random probability vectors taking values in $[0,1]^{Z^d}$, independent of $X_0$. Denote by $P$ and $E$ the probability and expectation induced by this family of random variables. Call $\nu$ the (marginal) distribution of $u_1(\cdot,\cdot)$. Under $\nu$, for all $i, j \in Z^d$, $u_1(i,j) \in [0,1]$ and $\sum_j u_1(i,j) = 1$ almost surely.
We further assume the second moment condition (2.1), where $|j| = \sqrt{jj^*}$. To avoid dealing with periodicity, we assume (2.2). For $n \ge 1$ and $i \in Z^d$ we define $X_n(i)$ by (2.3). This may define a degenerate random variable. In the case $u_n(i,j) = 0$ if $|i-j| > M$ for some $M$ (finite range), (2.3) defines a Markov process on $R^{Z^d}$. After introducing the concept of duality below we give sufficient conditions for (2.3) to define a Markov process. Definition (2.3) means that the transpose of the height process at time $n$ is the product of $n$ infinite dimensional independent identically distributed random matrices times the transpose of the height vector at time $0$: $X^*_n = u_n u_{n-1} \cdots u_1 X^*_0$. Notice that not only are the matrices independent, but the rows inside each matrix are independent too.
The height at the origin at time $n$, $X_n(0)$, can be expressed as a conditional expectation of a function of a simple random walk in a (space-time) random environment. To our probability space attach a family $\{w_n(i) : n \ge 0,\ i \in Z^d\}$ of iid random variables, uniformly distributed in $[0,1]$ and independent of everything else. For $1 \le k \le n$, let $Y^{x,n}_k$ denote the position at time $k$ of a random walk in $Z^d$ running up to time $n$, starting at site $x$, defined by $Y^{x,n}_0 = x$ and, for $1 \le k \le n$, by the transition probabilities
$$P(Y^{x,n}_k = j \mid Y^{x,n}_{k-1} = i,\ F_n) = u_{n-k}(i,j) =: v_{k-1}(i,j).$$
In words, conditioned on $F_n$, the walk at time $k$ jumps from $i$ to $j$ with probability $v_k(i,j)$. Since this walk uses the $u_n(i,j)$ backwards in time, we will call $Y^{x,n}_n$ the backward walk.
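On a finite truncation, the duality between heights and the backward walk is an exact linear-algebra identity: the conditional law of the backward walk after $n$ steps is the corresponding row of the matrix product, so the conditional expectation of the initial height at the walk's position equals the height computed by iterating the PCA. A sketch (finite size, our own naming) checking this:

```python
import random

def random_stochastic_matrix(n, rng):
    """n x n matrix with independent rows, each a random probability vector."""
    rows = []
    for _ in range(n):
        w = [rng.random() for _ in range(n)]
        s = sum(w)
        rows.append([v / s for v in w])
    return rows

def matvec(u, x):
    return [sum(u[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def vecmat(p, u):
    """Distribution update of the walk: p -> p u (row vector times matrix)."""
    n = len(p)
    return [sum(p[i] * u[i][j] for i in range(n)) for j in range(n)]

rng = random.Random(3)
n, steps, site = 6, 4, 2
X0 = [rng.uniform(-1.0, 1.0) for _ in range(n)]
mats = [random_stochastic_matrix(n, rng) for _ in range(steps)]  # u_1, ..., u_n

# Height side: X_n* = u_n u_{n-1} ... u_1 X_0*.
X = X0[:]
for u in mats:
    X = matvec(u, X)

# Walk side: given the environment, the backward walk started at `site`
# uses u_n first, then u_{n-1}, ...; its law is the row of the product.
p = [0.0] * n
p[site] = 1.0
for u in reversed(mats):
    p = vecmat(p, u)

# Duality: X_n(site) = E( X_0(Y_n^{site,n}) | F_n ).
assert abs(sum(p[j] * X0[j] for j in range(n)) - X[site]) < 1e-9
```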
for all $n \ge 0$. Then $X_n(x)$ is well defined for all $n$ and $x$, and (2.5) holds. In particular, if $X_0(i) = i\lambda^*$ for some deterministic vector $\lambda$, then (2.6) holds.

Proof. We first want to show that (2.5) holds for all $n$ and $x$. Let us first consider $|x| = 1$.
Again by Fubini and a straightforward induction argument, we obtain the desired identity. This finishes the proof.
From (2.1), we have that $E|Y_n|^2 < \infty$. Hence, a sufficient condition for the existence of the process under this hypothesis is that $E|X_0(j)| \le C|j|^2$ (2.13) for some positive constant $C$ (strengthenings of (2.1) allow for weakenings of (2.13)). Consider also $\tilde Z_n(x)$, the position at time $n$ of the (forward) process, defined by (2.14), where $\tilde Y^x_n$ is a (forward) random walk in a random environment starting at $x$, built from the same sequence $w_n(i)$ of iid random variables uniformly distributed in $[0,1]$ defined above and from $\{I_k(i,j) : j\}$, the partition of the interval $[0,1]$ with lengths $|I_k(i,j)| = u_k(i,j)$ defined above. Given $F_n$, the conditional probabilities for this walk are given by the $u_k(i,j)$, now used forward in time.

Remark 2.1.
Let $X_n$, $Z_n$ and $\tilde Z_n$ denote the vectors $X_n(\cdot)$, $Z_n(\cdot)$ and $\tilde Z_n(\cdot)$, respectively. Clearly, (2.18) holds for all $n \ge 0$ in distribution. We will resort to this relation in most of what follows. One of its main uses here is based on the fact that the centered forward process $\tilde Z_n$ is a martingale. We observe that even though the fixed-time marginal distributions of the backward and forward walks coincide, it is not true that the backward process is a martingale. This explains why we introduce the forward process. From (2.18), for all $n$, we have (2.20) in distribution, where $Z_n := Z_n(\cdot)$ and $\tilde Z_n := \tilde Z_n(\cdot)$.
For the remainder of this section, and in the next three, we study $\tilde Z_n$, its asymptotics and those of related processes. The relationships (2.6), (2.14) and (2.18) then yield asymptotics for $X_n$ starting on an inclined hyperplane (that is, starting on $X_0(i) = i\lambda^*$ for all $i$, where $\lambda$ is a non-null deterministic vector in $Z^d$).

Martingale property and variance of the forward process $\tilde Z_n$
We start by showing that the centered forward process is a martingale. Let $\mu = \sum_j j\lambda^*\, \nu[u_1(0,j)]$.
Lemma 2.2. The process $\tilde Z_n - n\mu$ is a martingale with respect to the filtration $\{F_n : n \ge 0\}$.
Proof. We have, for $n \ge 1$ and $x \in Z^d$, (2.24) for all $n$ and $k$. Iterating, we get (2.25), by the independence of $u_i(k,\cdot)$ from $F_{i-1}$ for all $i$ and $k$. The result follows from (2.24) and (2.25) by multiplying by $\lambda^*$.
Let $D := \{D_n,\ n \ge 0\}$ be a Markov chain in $Z^d$ with the transition probabilities (2.26). Notice that these are homogeneous off the origin. Indeed, for $\ell \neq 0$, $\gamma(\ell, k)$ depends only on $k - \ell$, where the first equality follows by the independence of $u_1(0,\cdot)$ and $u_1(\ell,\cdot)$ and the second by the translation invariance of $u_1(i, i+\cdot)$.
But at the origin, there obviously is no independence and thus no factoring, which produces an inhomogeneity there. Nevertheless, it is useful to think of D as a random walk with an inhomogeneity of the transition probabilities at the origin, as the results of the next section attest. By an abuse of tradition, we adopt the terminology.
The next result relates the variance of $\tilde Z_n(0)$ to the number of visits of $D_n$ to the origin up to time $n$. The finiteness of $\sigma^2$, defined below, is implied by the second moment condition (2.1).
Corollary 2.4. When $X_n$ starts as a hyperplane, that is, when $X_0(i) = i\lambda^*$ for all $i$, the conclusion of Proposition 2.3 holds for $X_n(0)$ as well.

Proof of Corollary 2.4. Immediate from Proposition 2.3, (2.18) and (2.6).
Proof of Proposition 2.3. Since $\tilde Z_n(0) - n\mu$ is a martingale, each of its increments defines a centered random variable. Hence the expectation of the product of the $k$-th and $l$-th increments vanishes for all $k \neq l$, by the independence among the $u$'s. To get the result by iteration it remains only to verify (2.34) for all $n \ge 1$. For that, let $\tilde D_n = \tilde Y^0_n - \hat Y^0_n$, with $\hat Y^0_n$ an independent copy of $\tilde Y^0_n$ (given $F_n$). It is immediate that (2.34) holds with the right hand side replaced by $P(\tilde D_n = 0)$.
Since $D_n$ also satisfies the same relations (2.36) and (2.37), we conclude that $\tilde D_k$ and $D_k$ are equally distributed for all $k$; in particular $P(\tilde D_n = 0) = P(D_n = 0)$, and this establishes (2.34).
Lemma 2.5. The transitions $\gamma(\ell, k)$ given by (2.26) correspond to those of a symmetric random walk perturbed at the origin.
Proof. At the origin the symmetry follows by translation invariance; for $\ell \neq 0$ it follows by translation invariance and the independence of $u_1(0,\cdot)$ and $u_1(\ell,\cdot)$. This finishes the proof.

Random walk estimates
We show in this section various asymptotic results involving $P(D_n = 0 \mid D_0 = 0)$ and related quantities. We use these results to prove the discrete-time version of the asymptotics (1.6) in Corollary 3.5 below.
We will exclude, for most of this and the coming sections, the case when $u$ concentrates on delta functions; that is, we will focus on the case (3.1). Otherwise the RAP is a voter model and behaves qualitatively differently. We will discuss the voter model briefly in the final section. Until then, unless explicitly noted, we restrict to (3.1).
Recall that $D_n$ was defined in (2.26). The random walk $D_n$ is not spatially homogeneous (but almost): its (time homogeneous) transition probabilities are distinct only at the origin. This poses a small technical problem for deriving the results of this section, since the supporting random walk results we need will be quoted from Spitzer (1964), which treats only the homogeneous case. The second moment condition assumed in Section 2 implies a second moment condition for the random walk $D_n$, which is then seen to be recurrent. It is also aperiodic due to (2.2).
We start by proving monotonicity of $P(D_n = 0 \mid D_0 = 0)$ with respect to $n$.
Proof. Equation (5.8) below says that $P(D_{n+1} = 0 \mid D_1 = 0) - P(D_{n+1} = 0 \mid D_1 = k)$ is the increment of the variance of a martingale. It thus has to be nonnegative, and the monotonicity follows.

Our next step is to calculate the power series of $P(D_n = 0 \mid D_0 = 0)$ in terms of that of the related quantity $P(T = n)$, where $T$ is the return time of the walk to the origin after leaving it. We will establish a comparison between the first power series and that for the homogeneous walk, from which the asymptotics of interest of the two walks are shown to be the same. Let $f(s)$ denote the power series of $P(D_n = 0 \mid D_0 = 0)$, and consider also the corresponding power series for $P(H_n = 0 \mid H_0 = 0)$, where $H_n$ is the homogeneous random walk with transition probability function (3.2) for all $\ell$ and $k$. Notice that the transition functions of $D_n$ and $H_n$ differ only at the origin.
Proof. Let $\{g_i,\ i = 1, 2, \ldots\}$ be the successive waiting times of the walk at the origin and $\{T_i,\ i = 1, 2, \ldots\}$ the successive return times to the origin after leaving it, and let $G_i$ and $\tau_i$ denote their partial sums, respectively.
We then have a decomposition according to the successive excursions. The last probability can be written in terms of $\gamma$, defined in (3.2). Thus, forming the power series, we get (3.6), where the second identity follows from the fact that the right hand side in (3.6) represents the general term of the power series obtained as the product of the two power series on the right hand side of that identity; $\psi_T(s)$ denotes $E(s^T)$. Similarly for the homogeneous walk, with $\tilde T$ the return time of the walk $H$ to the origin after leaving it. We thus arrive at an expression in which $T_x$ is the hitting time of the origin by the walk starting at $x$, $(p_x)$, $x \neq 0$, is the distribution of the jump from the origin, and $\phi_{T_x}(s) = \sum_{n \ge 0} P(T_x > n) s^n$.
We have, on the one hand, the identity above and, on the other, by P32.2 in Spitzer (1964), the corresponding limit for all $x$. By P28.4 and P12.3 in Spitzer (1964), pp. 345 and 124 respectively, $a$ is integrable with respect to $(p_x)$ since $|x|$ is (we leave the latter to be checked by the reader).
To be able to apply the dominated convergence theorem and conclude the convergence, we need to find $b(x)$, integrable with respect to $(p_x)$, dominating the relevant quantity for all $s < 1$.
For that, let $N$ denote the set of sites nearest neighboring the origin, and define $\wp$ accordingly, where $(p_x)$ is the distribution of the jump of $H$ from the origin. By (2.2), $\wp > 0$. Notice first a preliminary bound. Now notice that $T_x$ is stochastically smaller than the sum of $\|x\|_1$ independent copies of $\tau$. Thus we obtain the required domination, where $\phi_\tau(s) = \sum_{n \ge 0} P(\tau > n) s^n$.
Next, we will use the behavior of $f(s)$ as $s \to 1$ to read off the behavior of $\sum_{i=0}^{n-1} P(D_i = 0 \mid D_0 = 0)$ as $n \to \infty$. This in turn gives information on $\sum_{i=1}^{n} P(D_i = 0 \mid D_0 = 0)$, the expected number of visits to the origin, via a comparison valid for all large enough $n$. To get the lower bound, we split the sum; the monotonicity of Lemma 3.1 then allows us to bound the last term from above, the inequality being justified again by the monotonicity of $P(D_i = 0 \mid D_0 = 0)$. The lower bound follows.
For the upper bound, we write a similar decomposition. The factor in front of the sum is bounded below by $(2e)^{-1}$ for all large enough $n$, and the bound follows.
In $d = 1$ and $2$, we conclude that $\sum_{i=0}^{n-1} P(D_i = 0 \mid D_0 = 0)$ is of the order of $\sqrt{n}$ and $\log n$, respectively. In $d \ge 3$, it is bounded.
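The $\sqrt{n}$ order in $d = 1$ can be checked exactly for the simple symmetric walk on $Z$ (a stand-in we use for illustration; the perturbation at the origin of $D_n$ does not affect the order). The return probabilities satisfy $P(S_{2m} = 0) = \binom{2m}{m}4^{-m}$, computed below by recursion:

```python
import math

def occupation_sum(n):
    """sum_{k=0}^{n-1} P(S_k = 0) for the simple symmetric walk on Z.
    Only even times contribute; p_m = P(S_{2m} = 0) obeys
    p_m = p_{m-1} * (2m - 1) / (2m), p_0 = 1."""
    total, p, m = 0.0, 1.0, 0
    while 2 * m < n:
        total += p
        m += 1
        p *= (2 * m - 1) / (2 * m)
    return total

# The expected occupation of the origin grows like sqrt(2n/pi) in d = 1,
# so the ratio to sqrt(n) stabilizes near sqrt(2/pi) ~ 0.798.
ratios = [occupation_sum(n) / math.sqrt(n) for n in (1000, 4000, 16000)]
assert all(abs(r - math.sqrt(2 / math.pi)) < 0.05 for r in ratios)
```

An analogous experiment in $d = 2$ exhibits the $\log n$ growth, and in $d \ge 3$ the sum converges.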
Corollary 3.4. Let $\lambda$ be a non-null deterministic vector in $Z^d$ and assume $\sigma^2 > 0$. Then the asymptotics (3.25) hold.

Proof. Follows from Lemma 3.3 and Proposition 3.2.
Corollary 3.5. Let $X_0(i) = i\lambda^*$ for all $i$, with $\lambda$ a non-null deterministic vector in $Z^d$, and assume $\sigma^2 > 0$. Then the asymptotics (3.25) hold for $X_n(0)$.
Remark 3.2. Corollary 3.4 is the discrete-time version of (1.6). Notice however that there is a difference: while in discrete time one multiplies by $\sigma^2$, in continuous time the constant is $\sigma^2 + \mu^2$. In particular this means that for the smoothing process, for which $\sigma^2 = 0$, if $\mu \neq 0$ we get nontrivial fluctuations in continuous time and no fluctuations in discrete time. In the latter case the motion reduces to a hyperplane that moves deterministically at speed $\mu$. In the continuous case the nontrivial fluctuations come from the fluctuations of the Poisson processes.
Remark 3.3. Exact expressions for the constants of proportionality in $1$ and $2$ dimensions follow from those mentioned in Remark 3.1 and from strengthenings of (3.22). For the latter, in $2$ dimensions, we can use the lower and upper bounds $f(1 - \log n/n)$ and $f(1 - 1/n\log n) + \mathrm{const.}$, respectively, instead of (3.22). In $1$ dimension, we can use P20.2 in Spitzer (1964), p. 225. We leave the details to the interested reader.
Remark 3.4. In the case where assumption (3.1) does not hold, $\gamma(0,0) = 1$ and thus $D_n$ is absorbed at the origin. It follows that $P(D_n = 0 \mid D_0 = 0) \equiv 1$ and $P_n = n$. $V(Z_n(0))$ is then of order $n$ in all dimensions and, provided $\sigma^2 > 0$, so is the variance of $X_n(0)$ starting from $X_0(i) = i\lambda^*$. (This is the case of the voter model, to be discussed in the last section.)

Central limit theorem
In this section we prove a central limit theorem for $\tilde Z_n(0)$. By (2.6) and (2.18), it then holds also for the coordinate heights of the RAP starting as a hyperplane. For simplicity, we study only the one dimensional nearest-neighbor case, where $u_1(i,j) = 0$ for $|i - j| \neq 1$. This contradicts part of assumption (2.2). The reason to adopt it in this section (and only in this section, which is independent of the following ones) is that, in this case, the chain $\tilde D$ has nearest neighbor jumps only (but in $2Z$!); thus to go from one site to another it necessarily passes first through all intermediate sites. The results from the previous sections that we will need, the martingale property of $\tilde Z_n(0) - n\mu$ and the divergence to $\infty$ of $P_n$ as $n \to \infty$, are both valid in this case, with the same proofs as for the original case in the previous sections. We will also take $\lambda = 1$. The case of general jumps (not necessarily nearest neighbor) in $d = 1$ and $d \ge 2$ can be treated similarly.
Notice that in the case we are considering $P_n$ is of the order of $\sqrt{n}$, as argued at the end of the previous section.
Corollary 4.2. If $X_n$ starts as the straight line with $45^\circ$ inclination, then $X_n(0)$ satisfies the same central limit theorem as $\tilde Z_n(0)$.
Proof of Theorem 4.1. To establish this CLT, it is enough to check the two conditions of the corollary to Theorem 3.2 in Hall and Heyde (1980) (identifying $X_{ni}$ there with $P_n^{-1/2} W_i$ and $F_{ni}$ with $F_i$). The first condition is trivially satisfied, while the second one can be written as (4.1), with convergence in probability as $n \to \infty$. Calculating the expectations by further conditioning on $\tilde Y^0_{i-1}$ (as in many of the calculations so far), we can write (4.1) as (4.2). We note at this point that, by the simplifying assumptions at the beginning of the section, $D$ lives in $2Z$ and takes nearest neighbor jumps only.
Back to (4.2), it suffices to prove that the variance of its left hand side goes to $0$ as $n \to \infty$. For that, write the variance of the sum in (4.2) as (4.3). Some terms in the sum inside the expectation sign cancel, yielding (4.4). We look now into the summands above. We first condition on $D_j$ to get
$$\sum_k P(D_i = 0 \mid D_j = k)\,\bigl[P(D_j = k \mid F_j) - P(D_j = k \mid F_{j-1})\bigr]. \qquad (4.5)$$
We further condition on $\tilde Y^0_{j-1}$ and $\hat Y^0_{j-1}$ to get (4.6). Let us introduce the notation $u_{n,k} := u_n(k, k+1)$ (4.7). Notice that the possible values for $k$ in (4.6) are $l-2$, $l$ and $l+2$, and that
$$P(D_j = l-2 \mid \tilde Y^0_{j-1} = l + l',\ \hat Y^0_{j-1} = l',\ F_j) = (1 - u_{j,l'+l})\,u_{j,l'},$$
$$P(D_j = l \mid \tilde Y^0_{j-1} = l + l',\ \hat Y^0_{j-1} = l',\ F_j) = (1 - u_{j,l'+l})(1 - u_{j,l'}) + u_{j,l'+l}\,u_{j,l'},$$
$$P(D_j = l+2 \mid \tilde Y^0_{j-1} = l + l',\ \hat Y^0_{j-1} = l',\ F_j) = (1 - u_{j,l'})\,u_{j,l'+l}.$$
Substituting in (4.6), we get, after some more manipulation, the expression (4.8). We will analyze explicitly only the first sum in it, by taking the sum over $i$, squaring and taking expectations; the second one has the same behavior.
To alleviate notation, let us define the array $A^n_{j,l}$ as below. We want then to estimate (4.12). We will show below that (4.12) is (at most) of the order of $P(D_j = 0)$. Substituting in (4.3) and performing the sum over $j$, we get a term of the order of $P_n$. To find (an upper bound on) the variance of (4.2), we multiply by a constant times $P_n^{-2}$ (already taking into account the second sum in (4.8)). Since $P_n \to \infty$ as $n \to \infty$, this variance goes to $0$ as $n \to \infty$, as desired.
We rewrite (4.12) as (4.13), where we use the independence between the $u$'s and the $p$'s (the $A$'s are deterministic).

Now we bound (4.13) by a constant times (4.14), since $|A^n_{j,l}|$ is uniformly bounded in $l$, $j$ and $n$ (we will prove this below). The expectation inside the sum in (4.14) vanishes if $\{l, l'\} \cap \{k, k'\} = \emptyset$. The sum over pairs with full intersection is bounded above by (4.15). The expectation inside that sum is constant for $l = l'$ and, separately, for $l \neq l'$; thus it is bounded above uniformly by a constant, and this part of the sum admits the required bound. For pairs with intersection at only one point, we argue similarly. All cases have been considered and we thus have the result. Thus (4.18) becomes an expression whose probabilities inside the sums are maximal when $l = 0$ (by the martingale argument already used in the proof of Lemma 3.1, which applies to this case as well). Now, since $\tilde D$ is a one dimensional simple symmetric random walk (in $2Z$; it is homogeneous off the origin and $T$ does not depend on the transitions from the origin), we have that $P(T > n)$ is of the order of $n^{-1/2}$ (see Spitzer (1964), P32.1 on p. 378 and P32.3 on p. 381). By the arguments at the end of the previous section, so is $P(D_n = 0)$. This implies that (4.25) is bounded in $n$. The argument is finished.

Convergence to an invariant distribution
In this section we prove that $\tilde Z_n$ as seen from the height at the origin converges almost surely. This has the immediate consequence that $X_n$, when starting with a hyperplane, converges weakly as seen from the height at the origin.
Proposition 5.1. $\hat Z_n$ converges almost surely. Let $\hat Z_\infty$ denote the limit. If $\sigma^2 > 0$, then (5.4) holds.

Corollary 5.2 below contains the discrete-time version of the asymptotics announced in (1.7). In Proposition 5.3 we show that the weak limit is invariant for the process.
Corollary 5.2. When $X_n$ starts as a hyperplane, that is, when $X_0 = \{i\lambda^*,\ i \in Z^d\}$, with $\lambda$ a deterministic vector in $Z^d$, then $\hat X_n$ converges weakly. Let $\hat X_\infty$ denote the weak limit. If $\sigma^2 > 0$, then (5.4) holds for $\hat X_\infty$ as well.

Proof of Corollary 5.2.
Under the assumed initial conditions, by (2.6), $\hat X_n(x) = X_n(x) - X_n(0)$, and it is clear that this quantity equals $\hat Z_n(x)$ in distribution.
Proof of Proposition 5.1. We start by showing that, for each $x \in Z^d$, $E(\hat Z_n(x) - x\lambda^*)^2$ is uniformly bounded in $n$. Since it is a martingale, it then converges almost surely and in $L^2$ (see Theorem 2.5 and Corollary 2.2 in Hall and Heyde (1980), for instance). Given $x$,
$$V(\hat Z_n(x)) = 2V(\tilde Z_n(0)) - 2C(\tilde Z_n(0),\ \tilde Z_n(x) - x\lambda^*), \qquad (5.5)$$
where $C$ denotes the covariance, since $\tilde Z_n(x) - x\lambda^*$ and $\tilde Z_n(0)$ are equally distributed by the translation invariance of the model. We already have an expression for the first term on the right hand side from Proposition 2.3. We need to derive one for the last term, which we do now. From the proof of Lemma 2.2 we already have the needed identity, whose last equality is due to the independence of at least one of the $\theta$'s (the one with the higher subscript) from the conditional probabilities, and from the other $\theta$ when $i \neq j$, and due to the null mean of the $(\theta\lambda^* - \mu)$'s; $\sigma^2$ is the second moment of the $\theta\lambda^*$'s. Now, reasoning as in the proof of Proposition 2.3 and using the above discussion, after a reasoning similar to that used in the proof of Lemma 4.3 above, one sees that the last expression equals $\sum_{k=0}^{n} P(D_{n-k} = 0 \mid D_0 = 0)\,P(T_x > k)$, where $T_x$ is the hitting time of the origin by the walk $D$ starting at $x$.
Since the above expression is the variance of a martingale, it is monotone in $n$. To show it is bounded, it suffices to consider its power series (in the variable $0 \le s < 1$) and show that, multiplied by $1 - s$, it remains bounded as $s \to 1$. The limiting value of this procedure gives the limit in $n$ of (5.9), which in turn gives the variance of $\hat Z_\infty(x)$, by the $L^2$ Martingale Convergence Theorem. Since (5.9) is a convolution, its power series factors as $f(s)\,\varphi_{T_x}(s)$, where $f(s)$ is the power series of $P(D_n = 0 \mid D_0 = 0)$ and $\varphi_{T_x}$ that of $P(T_x > n)$ (as in Section 3).
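The factorization used here, namely that the generating function of the convolution $\sum_{k} P(D_{n-k}=0)P(T_x>k)$ is the product $f(s)\varphi_{T_x}(s)$, can be checked numerically. The sketch below uses a toy lazy symmetric walk on $Z$ rather than the walk $D$ of the paper (whose law depends on $u$); the truncation order $N$, lattice half-width $L$ and starting point $x = 3$ are arbitrary choices.

```python
import numpy as np

# Toy lazy symmetric walk on Z: stays put w.p. 1/2, steps +-1 w.p. 1/4 each.
N, L, x = 40, 200, 3          # series truncation, lattice half-width, start point

def step(d):
    return 0.5 * d + 0.25 * np.roll(d, 1) + 0.25 * np.roll(d, -1)

# p[n] = P(D_n = 0 | D_0 = 0), by evolving the exact distribution.
d = np.zeros(2 * L + 1); d[L] = 1.0
p = []
for _ in range(N + 1):
    p.append(d[L])
    d = step(d)

# q[k] = P(T_x > k): the same walk started at x, killed at the origin.
d = np.zeros(2 * L + 1); d[L + x] = 1.0
q = []
for _ in range(N + 1):
    d[L] = 0.0                # remove mass that has already hit the origin
    q.append(d.sum())
    d = step(d)

# Coefficients of the variance series: c[n] = sum_{k<=n} p[n-k] q[k].
c = np.convolve(p, q)[:N + 1]

# Generating functions: f(s) * phi_{T_x}(s) matches the series of c at s = 0.5,
# up to a tiny truncation error.
s = 0.5
f_phi = np.polyval(p[::-1], s) * np.polyval(q[::-1], s)
series = np.polyval(c[::-1], s)
print(abs(f_phi - series))    # small
```

The convolution structure is exactly why the power-series argument reduces the boundedness question to the behavior of $f$ and $\varphi_{T_x}$ near $s = 1$.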
Remark 5.1. Substituting (5.13) into (5.8), we get (5.14). In 3 or more dimensions, the expression in (5.8) is the difference of two positive quantities, the first of which is bounded in $n$ (and hence so is the second, which is smaller), as follows from Spitzer (1964), P7.9 on p. 75. We conclude that it is bounded in $n$ and $x$. Now we argue the asymptotics (5.4); assume $\sigma^2 > 0$.
We first remark that in 3 or more dimensions there is no space scaling, since (5.8) is bounded in $x$ as well as in $n$, as seen above.
In 1 and 2 dimensions, the spatial scaling is given by the scaling of $a(x)$, as readily seen from (5.14).
Remark 5.2. The case $\sigma^2 = 0$ corresponds to the discrete-time smoothing process with no randomness ($W \equiv 1$ in Definition (0.1) of Liggett (1985)). In this case the hyperplanes are invariant configurations.
Proposition 5.2. Let $\hat X_n$ be defined by $\hat X_n(x) = X_n(x) - X_n(0)$, the RAP as seen from the height at the origin. Then $\hat Z_\infty$ is invariant for $\hat X_n$. That is, if $u$ is a copy of $u_1$ independent of $\hat Z_\infty$, then $u\hat Z_\infty$ and $\hat Z_\infty$ have the same distribution.
Proof. Let $u$ be a copy of $u_1$ independent of $\hat Z_\infty$ and of $\hat Z_n$, $n \ge 1$, and let $u_0$ denote the matrix all of whose rows are identical to the vector $\{u(0,x),\, x \in Z^d\}$. Now let $\hat u = u - u_0$. To prove the proposition, it suffices to show that $\hat u \hat Z^*_\infty = \hat Z^*_\infty$ in distribution. But this is clear from the facts that $\hat u \hat Z^*_n = \hat Z^*_{n+1}$ in distribution and that $\hat u \hat Z^*_n$ converges weakly to $\hat u \hat Z^*_\infty$. The latter point is argued from the $L^2$ convergence of $u\hat Z^*_n$ and $u_0\hat Z^*_n$ as follows. Given $x$, we have (5.15), where the inequality follows by Jensen and the last identity is due to the space translation invariance of the distribution of $\hat Z_n(y) - \hat Z_\infty(y)$. The same inequality holds with $u$ replaced by $u_0$, since this amounts to replacing $x$ by $0$. One concludes (5.16), and the $L^2$ convergence follows from the $L^2$ Martingale Convergence Theorem.

The one-dimensional case
In the one-dimensional case we are able to treat random initial conditions. Let $(X_n)_{n \ge 0}$ denote the height system and $(x_i)_{i \in Z}$ its initial configuration, with i.i.d. increments $y_i = x_i - x_{i-1}$ satisfying $Ey_i = 1/\alpha > 0$ and $Vy_i = \beta^2$. Notice that this distribution satisfies the sufficient condition for the existence of the process given by (2.13). Denote $y_i - 1/\alpha$ by $\bar y_i$, and let $\bar X_0(i) = X_0(i) - i/\alpha$ and $S_n(0) = E(\bar X_0(Y^0_n) \mid \mathcal F_n)$. We then have, by (2.19),
$$X_n(0) = S_n(0) + (1/\alpha) Z_n(0), \qquad (6.1)$$
in distribution, where, as before, $Z_n(0) = E(Y^0_n \mid \mathcal F_n)$. Notice that $ES_n(0) = 0$.
The proposition above allows us to obtain different behavior of the fluctuations of the height at the origin in the biased and unbiased cases. The corollary below shows that in the unbiased case the fluctuations are subdiffusive (variance proportional to the square root of time), while in the biased case they are diffusive (variance proportional to time).
Proof of Corollary. Since $S_n(0)$ and $Z_n(0)$ are uncorrelated, $VX_n(0) = (1/\alpha)^2 VZ_n(0) + VS_n(0)$. We have shown in Corollary 3.4 that $VZ_n(0)$ is of the order of $\sqrt n$ in dimension one. Proposition 6.1 then shows that $VX_n(0)$ is of the order of $VS_n(0)$.
Proof of Proposition 6.1.
The first expectation on the right is computed by expanding the square; the cross terms vanish since $E(\bar y_i \bar y_j) = E(\bar y_i)E(\bar y_j) = 0$ for all $i \neq j$, and $E\bar y_i^2 = \beta^2$ for all $i$. Similarly, the second expectation equals
$$\beta^2 \sum_{i<0} E[P^2(Y^0_n \le i \mid \mathcal F_n)], \qquad (6.4)$$
and the last one vanishes.
We get the upper and lower bounds for $E(S_n(0))^2$ given in (6.6); the bounds are based on $P^2(\cdots) \le E(P^2(\cdots \mid \mathcal F_n)) \le P(\cdots)$. When $E\sum_j j\,u_1(0,j) = 0$, the random walk $Y^0_n$ has zero mean and, since its variance is positive and finite, both bounds are of order $\sqrt n$. When $E\sum_j j\,u_1(0,j) \neq 0$, the random walk $Y^0_n$ is not centered and the bounds are of order $n$.
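The order of these bounds can be illustrated directly. For an integer-valued walk, $\sum_{i\ge1} P(Y_n \ge i) + \sum_{i\le-1} P(Y_n \le i) = E|Y_n|$, which is the flavor of the upper bound in (6.6) up to the factor $\beta^2$. The sketch below uses a nearest-neighbor walk $Y_n = 2\,\mathrm{Bin}(n,p) - n$ as a stand-in for $Y^0_n$ (whose law depends on $u$); $n = 400$ and $p = 0.6$ are arbitrary choices.

```python
import math

def mean_abs(n, p):
    """E|Y_n| for Y_n = 2*Binomial(n, p) - n, computed exactly via the log-pmf."""
    tot = 0.0
    for k in range(n + 1):
        logpmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
                  + k * math.log(p) + (n - k) * math.log(1 - p))
        tot += abs(2 * k - n) * math.exp(logpmf)
    return tot

n = 400
unbiased = mean_abs(4 * n, 0.5) / mean_abs(n, 0.5)   # sqrt(n) growth: ratio ~ 2
biased   = mean_abs(4 * n, 0.6) / mean_abs(n, 0.6)   # linear growth:  ratio ~ 4
print(unbiased, biased)
```

Quadrupling $n$ roughly doubles $E|Y_n|$ in the centered case and quadruples it in the biased one, matching the $\sqrt n$ versus $n$ dichotomy.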

Continuous time
In this section we discuss the continuous-time process described in the introduction. Each site of $Z^d$ is provided with an exponential clock of rate 1. Every time it rings, the corresponding height jumps to a random convex combination of its own height and the heights of its neighboring sites. Let $\eta_t$, $t \ge 0$, denote this process. We choose to prove the results for this process by approximating it by a family of discrete-time processes, similar to the discrete process considered before. A direct approach should also work.
Let $T(i,k)$, $k \ge 1$, be the successive times of the Poisson process of site $i$, and let $u_k(i,\cdot)$ be the $k$-th (independent) realization of the random vector $u(i,\cdot)$. Fix a positive integer $N$ as scaling parameter and define a family $\{\beta^N_{n,i},\, n \ge 1,\, i \in Z^d\}$ of indicators of the event that the clock of site $i$ rings in the $n$-th time interval of length $1/N$. Hence $(\beta^N_{n,i})$ is a family of i.i.d. Bernoulli random variables with parameter $P(T(i,1) \le 1/N) = 1 - \exp(-1/N)$. Let $K_n = \sum_{k=1}^n \beta^N_{k,i}$. For a fixed initial configuration $X_0 \in R^{Z^d}$, define the process $(X^N_n)_{n \ge 0}$ accordingly. Let $\mathcal F^N_n$ be the sigma-algebra generated by $\{\beta_{k,i},\, u_k(i,j),\, 1 \le k \le n,\, i,j \in Z^d\}$. By Lemma 2.1, $X^N_n(0)$ is described as the conditional expectation of a function of a random walk, where $Y^{N,0,n}_k$, $k = 0, \dots, n$, is the backward random walk starting at the origin. Notice that $\{u^N_k(i,\cdot) : k \ge 0,\, i \in Z^d\}$ is a family of i.i.d. vectors. As in most of the paper so far, we concentrate on initial inclined hyperplanes for the rest of the section; that is, we suppose $X_0(i) = i\lambda^*$ for all $i$, with $\lambda$ a deterministic non-null vector of $Z^d$.
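The variables $\beta^N_{n,i}$ are just indicators that site $i$'s Poisson clock rings during the $n$-th time slot of width $1/N$; a minimal sketch for a single site (the values $N = 10$, the horizon, and the seed are arbitrary choices):

```python
import math
import random

random.seed(0)
N, slots = 10, 20000            # time discretization and number of slots simulated

# Poisson(1) clock times T(i, k) for one site, via exponential interarrivals.
times, t = [], 0.0
while True:
    t += random.expovariate(1.0)
    if t >= slots / N:
        break
    times.append(t)

# beta[n] = 1 iff at least one mark falls in the n-th slot of width 1/N.
beta = [0] * slots
for t in times:
    beta[min(int(t * N), slots - 1)] = 1

# The slots are i.i.d. Bernoulli with parameter 1 - exp(-1/N).
emp = sum(beta) / slots
print(emp, 1 - math.exp(-1 / N))
```

The empirical frequency of occupied slots matches $1 - e^{-1/N}$, which is the Bernoulli parameter stated above; letting $N \to \infty$ recovers the continuous-time clocks.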
The same arguments as in Section 2 yield an expression in terms of $M^N(n)$, the time spent at the origin in the time interval $[0,n]$ by a (symmetric, inhomogeneous at the origin) random walk $D^N_t$, whose transition probabilities are obtained by recalling that $u_1(i,j)$ has the same distribution as $u(i,j)$; here $\mu$ and $\sigma^2$ are defined in (1.5). It is a standard matter to prove that the process $Y^{x,t}_s$, $s \in [0,t]$, defined by the almost sure coordinatewise limit of $Y^{N,x,[Nt]}_{[Ns]}$ as $N \to \infty$, is a continuous-time random walk with transition rates $\{u_\cdot(\cdot,\cdot)\}$. This walk uses the Poisson marks and the realizations of the $u$'s backwards in time, from $t$ to $0$. Using (7.1), we prove in Lemma 7.1 below that $E(Y^{N,x,[Nt]}_{[Nt]} \mid \mathcal F^N_{[Nt]})$ converges to $E(Y^{x,t}_t \mid \mathcal F_t)$, where $\mathcal F_t$ is the sigma-algebra generated by the Poisson processes and the realizations of the $\{u_\cdot(\cdot,\cdot)\}$ corresponding to the Poisson marks in the interval $[0,t]$. It is then easy to show that the limiting process exists and has generator (1.1).
Lemma 7.1. If $E\eta_0(j) \le \mathrm{const.}\,|j|^2$ for all $j$, then $X^N_{[Nt]}$ converges coordinatewise to $\eta_t$ in $L^1$ as $N \to \infty$, for each fixed $t$. If $E(\eta_0(j)^2) \le \mathrm{const.}\,|j|^2$ for all $j$, then there is also convergence in $L^2$.
Proof. Fix $x \in Z^d$ and consider the event $A^N_t$. Then (7.5) holds. Notice that $\mathcal F^N_{[Nt]} \subset \mathcal F_t$ for all $t$. The latter expectation can be written, disregarding constant factors, as in (7.6)-(7.7), and the argument for convergence in $L^1$ closes with the observation that $(Y^{x,t}_t)^2$ and $(Y^{N,x,[Nt]}_{[Nt]})^2$ are uniformly integrable and that $P(A^N_t) \to 0$ as $N \to \infty$ for each fixed $t$; both assertions are not difficult to check. Convergence in $L^2$ follows the same steps, with the $|\cdots|$ expressions in (7.4)-(7.7) replaced by $[\cdots]^2$. (The analogue of inequality (7.4) follows by Jensen.) Now, by Lemmas 2.1 and 7.1, when $\eta_0$ is the inclined hyperplane above, the variance of the height at the origin is expressed in terms of $M(t)$, the time spent at the origin up to time $t$ by a continuous-time random walk $(D_t)_{t \ge 0}$ with transition rates given by
$$q(0,k) = \sum_j \nu\big(u(0,j)u(0,j+k)\big), \qquad (7.10)$$
and, for $\ell \neq 0$ and $k \in Z^d$,
$$q(\ell, \ell + k) = \nu u(0,k) + \nu u(0,-k). \qquad (7.11)$$
Now the asymptotic variance for the continuous case follows from the asymptotics of the number of visits to the origin of a continuous-time symmetric random walk with an inhomogeneity at the origin. The asymptotics of Section 2 can be obtained also for the continuous-time walk.
The convergence to an invariant measure in the continuous case follows similarly, using the discrete-time approximation and the fact that $E(Y_t \mid \mathcal F_t)$ is a martingale. Here $Y_t$ is the limiting process obtained from $Y^N_{[Nt]}$, the walk with the transitions above. We have a bound in which the subscript $x$ means that the corresponding random walk starts at $x$. The fact that the last quantity is bounded, together with the martingale property, implies the convergence. The fact that the limit is invariant follows as in Proposition 5.2.
We leave the extension of the results of Sections 4, 5 and 6 to the reader, who may employ the approximation above.
Comparison with Liggett's results. In display (0.2) of Chapter IX, Liggett (1985) defines linear systems using families of random operators $A_z(x,y)$ and linear coefficients $a(x,y)$. Our process corresponds to the choice $a(x,y) = 0$ for all $x, y$, with the $A_z(x,y)$ determined by translation invariance. This implies that Liggett's (1.5) is also satisfied, and his construction works in our case. We have proposed two somewhat simpler constructions: one, sketched in the introduction, exploits the existence of an almost sure duality relation; the other is a standard passage from discrete to continuous time, developed in Section 7.
It is also interesting to notice that, when computing the two-point correlation functions in the translation invariant case of his Theorem 3.1, a function called $q_t(x,y)$ there plays a fundamental role (see page 445). While in the general case $\sum_y q_t(x,y)$ is not expected to be one, in the RAP case the $q_t(x,y)$ are the transition probability functions at time $t$ of the random walk with rates $q(x,y)$ given by (7.10) and (7.11).
His Theorem 3.17 can be applied to our process to yield weak convergence to constant positive configurations, when starting with translation invariant positive initial conditions. He does not treat the initial conditions we considered in Theorem 5.1 (presumably because his work has a different perspective).
The nearest-neighbors case. Conservative nearest-particle systems. Consider $d = 1$ and $u(i,j) = 0$ if $|i - j| > 1$. In this case, if the initial heights are ordered, that is, $\eta_0(i) < \eta_0(i+1)$ for all $i$, then they remain ordered at all later times. Projecting the heights onto a vertical line, we obtain a point process; we interpret each event of this point process as a particle. If $\eta_0(0) = 0$, then the initial point process has a particle at the origin. The dynamics obtained by this projection can be described as follows: each particle, after an exponential time, chooses a random position between the two neighboring particles and jumps to it. Since the interaction occurs only with the two nearest neighboring particles and the number of particles is conserved, we call this motion the conservative nearest-particle system. See Liggett (1985) for examples of (non-conservative) nearest-particle systems.
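A minimal sketch of this particle dynamics on a finite system (only interior particles move, and the number of particles, the horizon and the seed are arbitrary choices; picking a uniformly chosen particle per event is the usual stand-in for independent rate-1 clocks):

```python
import random

random.seed(1)
pos = sorted(random.uniform(0, 100) for _ in range(50))   # initial point process
L = len(pos)

# Each event: an interior particle jumps to a uniform point strictly between
# its two nearest neighbors, so order is preserved and no particle is created
# or destroyed.
for _ in range(10000):
    i = random.randrange(1, L - 1)
    pos[i] = random.uniform(pos[i - 1], pos[i + 1])

# Particle number and ordering are conserved.
print(len(pos), all(pos[i] <= pos[i + 1] for i in range(L - 1)))
```

The two conservation properties printed at the end are exactly what distinguishes this conservative system from the nearest-particle systems in Liggett (1985), where particles are born and die.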
To study the height at the origin $\eta_t(0)$ in the height system is equivalent to tagging the particle at the origin and following it in the particle system. The particle interpretation also allows us to study the projection of configurations that would otherwise not be localized. In particular, when $u(i,i+1) = 1 - u(i,i-1)$ is a uniform random variable in $[0,1]$, any homogeneous Poisson process is reversible for the particle system, when we disregard the labels of the particles. To see this, notice that if $X$ and $Y$ are independent exponential random variables with the same parameter and $u$ is uniform in $[0,1]$ and independent of them, then $u(X+Y)$ and $(1-u)(X+Y)$ are again independent exponential random variables with the same parameter. The reversibility of the Poisson process for the particle system implies that the particle system as seen from the tagged particle has as reversible measure the Poisson process conditioned to have a particle at the origin. This in particular implies that if initially the differences between successive heights are i.i.d. exponentially distributed with some parameter $\rho$, then this distribution is reversible for the process as seen from the height at the origin.
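The key computation here, that a uniform split of the sum of two i.i.d. exponentials returns two i.i.d. exponentials with the same parameter, is easy to check by simulation (a seeded Monte Carlo sketch at rate 1; sample size and seed are arbitrary):

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.exponential(1.0, n)      # two independent Exp(1) gaps
Y = rng.exponential(1.0, n)
u = rng.uniform(0.0, 1.0, n)     # independent uniform split

A = u * (X + Y)                  # should again be Exp(1)
B = (1 - u) * (X + Y)            # should again be Exp(1), independent of A

print(A.mean(), B.mean())            # both close to 1
print(np.corrcoef(A, B)[0, 1])       # close to 0
print((A > 1).mean(), math.exp(-1))  # tail check: P(A > 1) = e^{-1}
```

The empirical means, the vanishing correlation, and the exponential tail of $A$ are the three facts used to conclude that the Poisson gaps are preserved by one jump of the dynamics.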
Another consequence of this isomorphism between the particle process and the height system is that we obtain the asymptotic behavior of the motion of the tagged particle in the conservative nearest-particle system. The continuous-time version of the Corollary to Proposition 6.1 implies that the motion of the tagged particle in the unbiased case ($\mu = 0$) is subdiffusive: starting from the Poisson process, the variance of $\eta_t(0)$ behaves asymptotically as $\sqrt t$. This corrects Theorem 9.1 in Ferrari (1996), which states wrongly that the behavior is diffusive.

Hydrodynamics
Let $\varphi : R^d \to R$ be a continuous function and let $\varphi_n : Z^d \to R$ be defined by $\varphi_n(i) = \varphi(i/n)$. If the distribution of $u(i,j)$ is symmetric, in the sense that $u(i,j)$ and $u(j,i)$ have the same distribution, then the following hydrodynamic limit holds:
$$\lim_{n \to \infty} E\big(X_{n^2 t}(nr) \mid X_0 = \varphi_n\big) = \Phi(r,t),$$
where $\Phi$ is the solution of the heat equation with initial condition $\varphi$ and diffusion coefficients $D_h = \sum_j |j_h|\, Eu(0,j)$, $j_h$ being the $h$-th coordinate of the vector $j \in Z^d$. To show the above, one computes the derivative in the $h$-th coordinate of $EX_{n^2 t}(rn)$ as follows:
$$\lim_{n \to \infty} n^2\, E\big(X_{n^2 t}(rn) - X_{n^2 t - 1}(rn)\big)_h = \lim_{n \to \infty} n^2 \sum_j E\big(u_{n^2 t}(rn, rn+j)\big)_h\, E\big(X_{n^2 t - 1}(rn+j) - X_{n^2 t - 1}(rn)\big)_h$$
$$= \lim_{n \to \infty} n^2 \sum_{j : j_h > 0} E\big(u_{n^2 t}(rn, rn+j)\big)_h\, E\big(X_{n^2 t - 1}(rn+j) + X_{n^2 t - 1}(rn-j) - 2X_{n^2 t - 1}(rn)\big)_h,$$
where $(\cdot)_h$ denotes the $h$-th coordinate. Letting $n \to \infty$ gives the desired result.
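The diffusive scaling above can be checked numerically for the deterministic (mean) dynamics. The sketch below uses hypothetical symmetric nearest-neighbor mean weights $Eu(0,\pm1) = b$, $Eu(0,0) = 1 - 2b$ on a periodic one-dimensional lattice with $\varphi(r) = \sin(2\pi r)$; with this normalization the heat-equation solution is $\Phi(r,t) = e^{-b(2\pi)^2 t}\sin(2\pi r)$, and the lattice size, $b$ and $t$ are arbitrary choices.

```python
import numpy as np

n, b, t = 50, 0.25, 0.1                 # lattice size, mean neighbor weight, time
a = 1 - 2 * b                           # mean weight of staying put

x = np.sin(2 * np.pi * np.arange(n) / n)   # phi_n(i) = phi(i/n), phi(r) = sin(2 pi r)
X = x.copy()
for _ in range(int(n * n * t)):         # n^2 t microscopic steps (diffusive scaling)
    X = a * X + b * (np.roll(X, 1) + np.roll(X, -1))

Phi = np.exp(-b * (2 * np.pi) ** 2 * t) * x   # heat-equation solution
print(np.max(np.abs(X - Phi)))                # small discretization error
```

The sine profile is an exact eigenvector of the circulant averaging step, so the simulated profile matches the heat-equation solution up to the $O(1/n^2)$ eigenvalue discretization error.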
Presumably both the law of large numbers and local equilibrium hold.
Using duality it is possible to prove the convergence to the diffusion equation also in the case when the mean is zero but the distribution of the $u$'s is not symmetric. If the mean $\mu$ is different from zero, one proves that the process converges to a linear PDE; in this case space and time scale in the same way.
We close this section with a comparison between the hydrodynamics of the $\eta_t$ process and the $\xi_t$ process studied by Kipnis, Marchioro and Presutti (1982). Consider the continuous-time $\eta_t$ process in one dimension with nearest-neighbor interaction: $u(i,i+1) = 1 - u(i,i-1)$ uniformly distributed in $[0,1]$ and $u(i,j) = 0$ otherwise. For this process the Poisson processes of rate $\rho$ on the line are invariant (and reversible) measures. The $\xi_t$ process is defined by the differences $\xi_t(i) = \eta_t(i+1) - \eta_t(i)$. Since the Poisson process is invariant for the $\eta_t$ process, the product measure of independent exponentials of parameter $\rho$ is invariant for the $\xi_t$ process. Kipnis, Marchioro and Presutti (1982) consider this process on a finite interval $\{1, \dots, L\}$ with the following boundary conditions: at the times of a Poisson process of rate 1, the value at site 1 is updated, independently of everything, by replacing whatever is in the site by an exponential random variable of mean $\rho_-$; at the times of an independent Poisson process of rate 1, the value at site $L$ is updated, independently of everything, by replacing whatever is in the site by an exponential random variable of mean $\rho_+$. They studied the unique invariant measure around site $rL$, $r \in [0,1]$, and obtained that, as $L \to \infty$, this measure converges to a product of exponentials with parameter $\varphi(r) = r\rho_- + (1-r)\rho_+$. This is called local equilibrium. The function $\varphi(r)$ is the stationary solution of the heat equation with boundary values $\varphi(0) = \rho_+$ and $\varphi(1) = \rho_-$. The corresponding solution for the hydrodynamic limit of the corresponding $\eta_t$ process is a non-equilibrium stationary profile growing with time: the solution of $\partial_t\Phi = \partial_r^2\Phi$ with boundary slopes $\partial_r\Phi(0,t) = \rho_-$ and $\partial_r\Phi(1,t) = \rho_+$. This solution is
$$\Phi(r,t) = (\rho_+ - \rho_-)\frac{r^2}{2} + r\rho_- + t(\rho_+ - \rho_-).$$
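One can check directly that the profile $\Phi(r,t) = (\rho_+ - \rho_-)r^2/2 + r\rho_- + t(\rho_+ - \rho_-)$ satisfies the heat equation with boundary slopes $\rho_-$ at $r = 0$ and $\rho_+$ at $r = 1$ (the boundary form is our reading of the omitted display). A finite-difference sketch, exact for this quadratic profile up to roundoff, with the illustrative values $\rho_- = 1$, $\rho_+ = 3$:

```python
# Candidate stationary-growth profile from the text.
def Phi(r, t, rm, rp):
    return (rp - rm) * r * r / 2 + r * rm + t * (rp - rm)

rm, rp, h, t = 1.0, 3.0, 1e-3, 1.0     # rho_-, rho_+, step and time are arbitrary

# Phi_t = Phi_rr at several interior points (central differences are exact
# for a profile quadratic in r and linear in t).
for r in (0.2, 0.5, 0.8):
    dt  = (Phi(r, t + h, rm, rp) - Phi(r, t - h, rm, rp)) / (2 * h)
    drr = (Phi(r + h, t, rm, rp) - 2 * Phi(r, t, rm, rp) + Phi(r - h, t, rm, rp)) / h ** 2
    assert abs(dt - drr) < 1e-6

# Boundary slopes: rho_- at r = 0 and rho_+ at r = 1.
slope0 = (Phi(h, t, rm, rp) - Phi(-h, t, rm, rp)) / (2 * h)
slope1 = (Phi(1 + h, t, rm, rp) - Phi(1 - h, t, rm, rp)) / (2 * h)
print(slope0, slope1)
```

In particular $\partial_t\Phi = \partial_r^2\Phi = \rho_+ - \rho_-$ everywhere, which is the constant growth rate of the non-equilibrium profile.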

The Voter Model
In this section we briefly consider the case where
$$P(\sup_j u_1(0,j) = 1) = 1, \qquad (9.1)$$
which turns $X_n$ into the voter model. We mention some (simple) results, without precise statements and without proofs, which we leave to the reader.
The dual of this model is even simpler than in the other cases (3.1), since it is $X_0$ evaluated at the positions of coalescing random walks. Thus (2.19) becomes
$$X_n(x) = X_0(Y^{x,n}_n), \qquad (9.2)$$
in distribution, where $\{Y^{x,n}_k,\, x \in Z^d,\, 0 \le k < n\}$ is a family of coalescing random walks such that $Y^{x,n}_0 = x$ for all $x \in Z^d$, with transition probabilities given by
$$P(Y^{x,n}_{k+1} = j \mid Y^{x,n}_k = i) = P(u_1(i,j) = 1) \qquad (9.3)$$
for all $x \in Z^d$ and $0 \le k < n$.
As concerns the fluctuations of the height at the origin for the model starting as a hyperplane, we first notice that $\gamma(0,0)$, as defined in (2.26), equals 1 and, in consequence, so does $P(D_n = 0 \mid D_0 = 0)$, with $D_n$ defined in (2.26). We conclude from Corollary 2.4 that in all dimensions
$$V(X_n(0)) = \sigma^2 n. \qquad (9.4)$$
Thus, whenever $\sigma^2 > 0$ (that is, whenever $u$ is non-deterministic), the height at the origin fluctuates as the square root of time in all dimensions. This is a far departure from the other (non-voter-model) cases (see Corollary 3.5).
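The linear-in-$n$ variance is transparent from the dual: starting from the hyperplane $X_0(i) = i\lambda^*$, (9.2) gives $X_n(0) = \lambda \cdot Y^{0,n}_n$, a sum of $n$ i.i.d. steps, so $V(X_n(0)) = \sigma^2 n$ in any dimension. A seeded Monte Carlo sketch in $d = 2$, with a hypothetical step law uniform on $\{\pm e_1, \pm e_2\}$ and $\lambda = (1,2)$ (so $\sigma^2 = 2.5$ for this choice):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0])                              # slope of the hyperplane
steps = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])    # voter picks one neighbor
sigma2 = np.mean((steps @ lam) ** 2)                    # 2.5 for this choice

def var_height(n, samples=20_000):
    """Monte Carlo estimate of V(X_n(0)) = Var(lam . Y_n) over dual walks."""
    idx = rng.integers(0, 4, size=(samples, n))
    Y = steps[idx].sum(axis=1)          # dual walk endpoints after n steps
    return np.var(Y @ lam)

v50, v100 = var_height(50), var_height(100)
print(v50 / 50, v100 / 100, sigma2)     # both ratios close to sigma^2
```

The per-step variance ratio is the same at both horizons, in every dimension, exactly as (9.4) asserts.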
For more general initial conditions, the model is also easier to study, due to (9.2). For the case treated in Section 6, it is easy to derive (precise) fluctuations and a central limit theorem as well. We will not elaborate further.
Finally, as regards the model as seen from the height at the origin, it is well known from coalescing random walks that in 1 and 2 dimensions all the random walks coalesce, and thus the differences vanish (under any initial condition) almost surely in the dual. We conclude that the direct model becomes rigid in the limit (that is, the height differences vanish in probability).
In 3 or more dimensions, the dual model does not exhibit coalescence of all the random walks. Since non-coalescing walks behave forever as independent walks, their difference fluctuates as the square root of time and thus does not converge in any sense. Therefore the height differences do not converge in any dimension greater than or equal to 3, another far departure from the non-voter-model case (see the results in Section 5).