On homogenization of elliptic equations with random coefficients

In this paper, we investigate the rate of convergence of the solution $u_\varepsilon$ of the random elliptic partial difference equation $(\nabla^{\varepsilon *} a(x/\varepsilon,\omega)\nabla^\varepsilon+1)u_\varepsilon(x,\omega)=f(x)$ to the corresponding homogenized solution. Here $x\in\varepsilon Z^d$, and $\omega\in\Omega$ represents the randomness. Assuming that the $a(x)$ are independent and uniformly elliptic, we obtain an upper bound $\varepsilon^\alpha$ for the rate of convergence, where $\alpha$ is a constant depending on the dimension $d\ge 2$ and on the deviation of $a(x,\omega)$ from the identity matrix. We also show that the (statistical) average of $u_\varepsilon(x,\omega)$ and its derivatives decays exponentially for large $x$.


Introduction.
In this paper we shall be concerned with the problem of homogenization of elliptic equations in divergence form. Let $(\Omega, \mathcal{F}, \mu)$ be a probability space and $a : \Omega \to R^{d(d+1)/2}$ be a bounded measurable function from $\Omega$ to the space of symmetric $d \times d$ matrices. We assume that there are positive constants $\lambda, \Lambda$ such that
$$\lambda I_d \le a(\omega) \le \Lambda I_d, \qquad \omega \in \Omega, \tag{1.1}$$
in the sense of quadratic forms, where $I_d$ is the identity matrix in $d$ dimensions. We assume that $Z^d$ acts on $\Omega$ by translation operators $\tau_x : \Omega \to \Omega$, $x \in Z^d$, which are measure preserving and satisfy the properties $\tau_x \tau_y = \tau_{x+y}$, $\tau_0 = $ identity, $x, y \in Z^d$. Using these operators we can define a measurable matrix-valued function on $Z^d \times \Omega$ by $a(x, \omega) = a(\tau_x \omega)$, $x \in Z^d$, $\omega \in \Omega$.
Let $Z^d_\varepsilon = \varepsilon Z^d$ be the $\varepsilon$-scaled integer lattice, where $\varepsilon > 0$. For functions $g : Z^d_\varepsilon \to R$ we define the discrete derivative $\nabla^\varepsilon_i g$ of $g$ in the $i$th direction to be
$$\nabla^\varepsilon_i g(x) = \frac{g(x + \varepsilon e_i) - g(x)}{\varepsilon},$$
where $e_i \in Z^d$ is the element with entry 1 in the $i$th position and 0 in the other positions. The formal adjoint of $\nabla^\varepsilon_i$ is given by $\nabla^{\varepsilon *}_i$, where
$$\nabla^{\varepsilon *}_i g(x) = \frac{g(x - \varepsilon e_i) - g(x)}{\varepsilon}.$$
We shall be interested in solutions of the elliptic equation
$$\sum_{i,j=1}^d \nabla^{\varepsilon *}_i \big[ a_{ij}(x/\varepsilon, \omega) \nabla^\varepsilon_j u_\varepsilon(x, \omega) \big] + u_\varepsilon(x, \omega) = f(x), \qquad x \in Z^d_\varepsilon. \tag{1.2}$$
Here $f : R^d \to R$ is assumed to be a smooth function with compact support and $a_{ij}(y, \omega)$ are the entries of the matrix $a(y, \omega)$, $y \in Z^d$.
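The summation-by-parts identity relating $\nabla^\varepsilon_i$ and its formal adjoint can be checked numerically. The following is an illustrative sketch, not part of the paper; the helper names and the concrete one-dimensional definitions $\nabla^\varepsilon g(x) = [g(x+\varepsilon)-g(x)]/\varepsilon$ and $\nabla^{\varepsilon *} h(x) = [h(x-\varepsilon)-h(x)]/\varepsilon$ are ours:

```python
EPS = 0.5  # lattice spacing epsilon

def grad(g, x, eps=EPS):
    # discrete derivative: (g(x + eps) - g(x)) / eps
    return (g(x + eps) - g(x)) / eps

def grad_star(h, x, eps=EPS):
    # formal adjoint: (h(x - eps) - h(x)) / eps
    return (h(x - eps) - h(x)) / eps

def lattice(eps=EPS, n=40):
    # finite piece of eps*Z large enough to contain both supports
    return [eps * k for k in range(-n, n + 1)]

g = lambda x: max(0.0, 1.0 - abs(x))       # compactly supported test functions
h = lambda x: max(0.0, 2.0 - abs(x)) ** 2

# summation by parts: sum (grad g) h = sum g (grad_star h)
lhs = sum(grad(g, x) * h(x) for x in lattice())
rhs = sum(g(x) * grad_star(h, x) for x in lattice())
assert abs(lhs - rhs) < 1e-9
```

The identity is exact (up to floating-point rounding) as long as the lattice window contains the supports of both test functions.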
It is well known [7,4] that, under the assumption of ergodicity of the translation operators $\tau_x$, $x \in Z^d$, the solution of (1.2) converges as $\varepsilon \to 0$ to the solution of the homogenized equation
$$-\sum_{i,j=1}^d q_{ij}\, \frac{\partial^2 u(x)}{\partial x_i \partial x_j} + u(x) = f(x), \qquad x \in R^d. \tag{1.3}$$
Here $q = [q_{ij}]$ is a symmetric positive definite matrix determined from $a(\omega)$, $\omega \in \Omega$. If we denote the expectation value on $\Omega$ by $\langle \cdot \rangle$, then one has
$$\lim_{\varepsilon \to 0} u_\varepsilon(x, \cdot) = u(x) \tag{1.4}$$
in an appropriate probabilistic sense. See [2] for extensions to unbounded, non-symmetric $a$'s.
Our goal in this paper is to estimate the rate of convergence in the limit (1.4). To do this we shall need to make rather restrictive assumptions on the matrix $a(\cdot)$, beyond the uniform boundedness assumption (1.1). We can, however, prove a result assuming only (1.1) which is helpful for us in studying the rate of convergence in (1.4). To motivate it, observe that since (1.3) is a constant coefficient equation, it is easy to see that the solution is $C^\infty$ and that for any $n$-tuple $\alpha = (\alpha_1, ..., \alpha_n)$ with $1 \le \alpha_i \le d$, $i = 1, ..., n$, one has
$$\sup_{x \in R^d} e^{\delta |x|} \left| \frac{\partial^n u(x)}{\partial x_{\alpha_1} \cdots \partial x_{\alpha_n}} \right| \le A_\alpha, \tag{1.5}$$
where $\delta > 0$ can be chosen to be fixed and $A_\alpha$ is a constant depending only on $\alpha$ and $f$. Consider now the problem of proving an inequality analogous to (1.5) for the expectation $\langle u_\varepsilon(x, \cdot) \rangle$ of the solution $u_\varepsilon(x, \omega)$ of (1.2). In view of the uniform boundedness (1.1) there exists $\delta > 0$, independent of $\varepsilon$, such that
$$\sup_{x \in Z^d_\varepsilon} e^{\delta |x|}\, |u_\varepsilon(x, \omega)| \le C, \qquad \omega \in \Omega,$$
where $C$ is a constant. To prove this one needs to use the deep theory of Nash [3]. One evidently can immediately conclude that $\sup_{x} e^{\delta |x|}\, |\langle u_\varepsilon(x, \cdot) \rangle| \le C$.
We can obtain a rate of convergence in (1.4) if we assume that the random matrix $a(\cdot)$ satisfies (1.1) and that the matrices $a(\tau_x \cdot)$, $x \in Z^d$, are independent. Our first theorem is as follows: Theorem 1.2. Suppose $a(\cdot)$ satisfies (1.1), the matrices $a(\tau_x \cdot)$, $x \in Z^d$, are independent, and $\gamma = 1 - \lambda/\Lambda$, so that $0 \le \gamma < 1$. Let $u_\varepsilon(x, \omega)$ be the solution of (1.2), where $f : R^d \to R$ is assumed to be a $C^\infty$ function of compact support. Then for $d \ge 2$ there is the inequality, where $\alpha > 0$ is a constant depending only on $\gamma$, and $C$ only on $\gamma$ and $f$. If $d \ge 3$ then one can take $\alpha = 1$ for sufficiently small $\gamma > 0$. For $d = 2$, $\alpha$ can be taken arbitrarily close to 1 if $\gamma > 0$ is taken sufficiently small.

Remark 1.
One can see by explicit computation that when $d = 1$ one has $\alpha = 1/2$. The bound of Theorem 1.2 is independent of the dimension $d$ when $d \ge 3$. We also have a theorem which is dimension dependent for all $d$. Theorem 1.3. Suppose $a(\cdot)$ and $f$ satisfy the same conditions as in Theorem 1.2. Let $g : R^d \to R$ be a $C^\infty$ function of compact support. Then for $d \ge 2$ there is the inequality, where $\beta > 0$ is a constant depending only on $\gamma$, and $C$ only on $\gamma$, $g$ and $f$. The number $\beta$ can be taken arbitrarily close to $d$ if $\gamma > 0$ is taken sufficiently small.
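For the one-dimensional case mentioned in Remark 1, the homogenized coefficient is classically the harmonic mean $\langle a^{-1} \rangle^{-1}$, which can be evaluated in closed form for a two-valued coefficient $a = 1 + \gamma Y$, $Y = \pm 1$ with equal probability. The following is an illustrative sketch, not from the paper; the function name is ours:

```python
def effective_coefficient_1d(gamma):
    # d = 1: the homogenized coefficient is the harmonic mean <a^{-1}>^{-1}
    # of a = 1 + gamma*Y, where Y = +1 or -1 each with probability 1/2.
    mean_inverse = 0.5 / (1.0 + gamma) + 0.5 / (1.0 - gamma)
    return 1.0 / mean_inverse

# For this two-valued model the harmonic mean equals 1 - gamma^2, strictly
# below the arithmetic mean <a> = 1 whenever gamma != 0.
assert abs(effective_coefficient_1d(0.5) - 0.75) < 1e-12
```

The gap between the harmonic mean $1 - \gamma^2$ and the arithmetic mean $1$ is the simplest instance of the deviation of $q$ from $\langle a \rangle$.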

Remark 2. Theorem 1.3 only gives more information than Theorem 1.2 in the case when d ≥ 3.
We first prove Theorems 1.2 and 1.3 under the assumption that the $a(x, \omega)$, $x \in Z^d$, $\omega \in \Omega$, are given by independent Bernoulli variables. This is accomplished in §4. We put
$$a(x, \omega) = \big( 1 + \gamma Y_x(\omega) \big)\, I_d, \qquad x \in Z^d, \tag{1.6}$$
where the random variables $Y_x$, $x \in Z^d$, are assumed to be i.i.d. Bernoulli, $Y_x = \pm 1$ with equal probability. We must take $\gamma$ in (1.6) to satisfy $|\gamma| < 1$ to ensure that $a(\cdot)$ remains positive definite.
In §5 the method is extended to the general case. The methods can also be extended to deal with variables a(x, ·), x ∈ Z d , which are weakly correlated. To carry this out one must have, however, rather detailed knowledge on the decay of correlation functions.
To prove Theorem 1.2 we use an idea from the proof of (1.4) in [7]. Thus we make the approximation $u_\varepsilon(x, \omega) \approx u(x)\, +$ random term, where $u(x)$ is the solution of (1.3). The random term is obtained from the solution $\Psi(\omega)$ of a variational problem on $\Omega$ (see Lemma 2.2). The difference between $u_\varepsilon(x, \omega)$ and the above approximation is estimated in Proposition 2.1. One can obtain a new proof of the homogenization result (1.4) from Proposition 2.1 by using the fact that $\Psi$ is square integrable on $\Omega$ and applying the von Neumann ergodic theorem. For the proof of Theorem 1.2 one needs to show that $\Psi$ is $p$-integrable for some $p < 2$. This $p$-integrability is not in the conventional sense $\langle |\Psi|^p \rangle < \infty$. If $a(x, \cdot)$ is given by (1.6), one expands $\Psi$ in the orthonormal basis of Walsh functions for $L^2(\Omega)$ generated by the Bernoulli variables $Y_x$, $x \in Z^d$. We say that $\Psi$ is $p$-integrable if the coefficients of $\Psi$ in this basis are $p$-summable. Evidently if $p = 2$ then $p$-integrability of $\Psi$ and $\langle |\Psi|^p \rangle < \infty$ are equivalent, but not if $p$ is different from 2. We use the Calderon-Zygmund theorem [10] to show that $\Psi$ is $p$-integrable for some $p < 2$.
Observe that when a(x, ·) is given by (1.6) then the entries of the matrix a(·) generate a 2 dimensional subspace of L 2 (Ω). One can readily generalise the proof described in the previous paragraph to all random matrices a(·) whose entries generate a finite dimensional subspace of L 2 (Ω). The main task of §5 is to extend the argument to the situation where the entries of a(·) generate an infinite dimensional subspace of L 2 (Ω). To carry this out we introduce the Legendre polynomials P l (z) to give us an approximately orthogonal basis for the space generated by the entries of a(·). We use in a crucial way the fact that the ratio of the L ∞ norm of P l to the L 2 norm, on the interval [−1, 1], is polynomially bounded in l (in fact √ 2l + 1 ).
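The norm computation behind this bound is elementary; with the normalized measure $dz/2$ on $[-1, 1]$ one has the standard facts

```latex
\|P_l\|_\infty = P_l(1) = 1, \qquad
\int_{-1}^{1} P_l(z)^2 \,\frac{dz}{2} = \frac{1}{2l+1},
\qquad\text{so}\qquad
\frac{\|P_l\|_{L^\infty[-1,1]}}{\|P_l\|_{L^2(dz/2)}} = \sqrt{2l+1}.
```

(With the unnormalized measure $dz$ the ratio is $\sqrt{(2l+1)/2}$; either way it grows only polynomially in $l$, which is what the argument requires.)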
We cannot use proposition 2.1 to prove theorem 1.3. Instead, we have to make use of the Fourier representation for u ε (x, ω) given in §3. This representation appears to have considerable power.
To illustrate this we use it to prove Theorem 1.1. The remainder of the proof of Theorem 1.3 then follows along the same lines as the proof of Theorem 1.2. The research in this paper was motivated by previous work of Naddaf and Spencer [6]. Let $A : R \to R^{d(d+1)/2}$ be a bounded measurable function from $R$ to the space of real symmetric $d \times d$ matrices. Assume that there are positive constants $\lambda, \Lambda$ such that $\lambda I_d \le A(z) \le \Lambda I_d$, $z \in R$, in the sense of quadratic forms. Let $\phi(x, \omega)$, $x \in Z^d$, $\omega \in \Omega$, be a Euclidean field theory satisfying the Brascamp-Lieb inequality [5]. The matrices $a(x, \omega)$, $x \in Z^d$, $\omega \in \Omega$, are obtained by setting $a(x, \omega) = A(\phi(x, \omega))$, $x \in Z^d$, $\omega \in \Omega$.
Naddaf and Spencer prove that the results of Theorem 1.3 hold under the assumption that φ is a massive field theory and A has bounded derivative. They further prove that if γ is sufficiently small then one can take β = d. They also have corresponding results when φ is assumed to be massless.
The Russian literature [8,11] contains some earlier work on the rate of convergence to homogenization; this work appears not to be rigorous.

Variational Formulation.
In this section we set out the variational formulation for the solution of (1.2) and for the effective diffusion matrix $q$ of (1.3). The following lemma is a consequence of the Banach-Alaoglu theorem [9]. Lemma 2.1. The functional $G : H^1(Z^d_\varepsilon \times \Omega) \to R$ has a unique minimizer $u_\varepsilon \in H^1(Z^d_\varepsilon \times \Omega)$ which satisfies the Euler-Lagrange equation for all $\psi_\varepsilon \in H^1(Z^d_\varepsilon \times \Omega)$. Next we turn to the variational formulation of the diffusion matrix $q$ in (1.3). To do this we use the translation operators $\tau_x$ on $\Omega$ to define discrete differentiation of a function on $\Omega$. Let $\varphi : \Omega \to R$ be a measurable function. For $i = 1, ..., d$ we define $\partial_i \varphi$ by
$$\partial_i \varphi(\omega) = \varphi(\tau_{e_i} \omega) - \varphi(\omega).$$
The formal adjoint $\partial^*_i$ of $\partial_i$ is given by
$$\partial^*_i \varphi(\omega) = \varphi(\tau_{-e_i} \omega) - \varphi(\omega).$$
The discrete gradient of $\varphi$, $\nabla \varphi$, is then a function $\nabla \varphi : \Omega \to R^d$ given by $\nabla \varphi(\omega) = (\partial_1 \varphi(\omega), ..., \partial_d \varphi(\omega))$, $\omega \in \Omega$. For a function $\Psi : \Omega \to R^d$ let $\|\Psi\|_2$ denote the $L^2$ norm, $\|\Psi\|_2 = \langle |\Psi|^2 \rangle^{1/2}$. We consider the linear space $H(\Omega)$, the closure in this norm of the set of gradients $\{\nabla \varphi : \varphi \in L^2(\Omega)\}$. We have then again from Banach-Alaoglu: Lemma 2.2. For $k = 1, ..., d$ the functional $G_k : H(\Omega) \to R$ has a unique minimizer $\Psi_k \in H(\Omega)$. The matrix $q = [q_{kk'}]$ is given by (2.5). In view of the fact that $\langle \Psi_k \rangle = 0$, $k = 1, ..., d$, it follows that $q$ is strictly positive definite. From the Euler-Lagrange equation one has the alternative expression (2.6). Next, let $\Delta$ be the discrete Laplacian on $Z^d$. Thus if $u : Z^d \to R$, then $\Delta u$ is defined by
$$\Delta u(x) = \sum_{i=1}^d \big[ u(x + e_i) + u(x - e_i) - 2 u(x) \big], \qquad x \in Z^d.$$
We denote by $G_\eta$ the Green's function for the operator $-\Delta + \eta$, where $\eta > 0$ is a parameter. Thus
$$(-\Delta + \eta)\, G_\eta(x) = \delta(x), \qquad x \in Z^d, \tag{2.7}$$
where $\delta$ is the Kronecker $\delta$ function; $\delta(x) = 0$ if $x \ne 0$, $\delta(0) = 1$. For any $\Psi \in H(\Omega)$ we can use $G_\eta$ to define a function $\chi : Z^d \times \Omega \to R$ by the formula (2.8). Proof. The fact that $\chi(x, \cdot)$ is in $L^2(\Omega)$ follows easily from the exponential decay of $G_\eta$. To prove (2.9) we use the relations (2.4); a summation by parts then gives (2.9) on using (2.7).
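The exponential decay of $G_\eta$ can be seen concretely in one dimension, where $(-\Delta + \eta) G_\eta = \delta$ is a tridiagonal system that can be solved directly. The following numerical sketch is our own illustration, not part of the paper's argument; for $\eta = 1/2$ the decay ratio is the root $r = 1/2$ of $r^2 - (2 + \eta) r + 1 = 0$:

```python
def green_function_1d(eta, n=200):
    # Solve (-Delta + eta) G = delta_0 on {-n, ..., n} with zero boundary,
    # where Delta u(x) = u(x+1) + u(x-1) - 2 u(x); Thomas algorithm.
    size = 2 * n + 1
    diag = [2.0 + eta] * size   # diagonal entries; off-diagonals are -1
    rhs = [0.0] * size
    rhs[n] = 1.0                # Kronecker delta at the origin
    for i in range(1, size):    # forward elimination
        m = -1.0 / diag[i - 1]
        diag[i] -= m * (-1.0)
        rhs[i] -= m * rhs[i - 1]
    g = [0.0] * size
    g[-1] = rhs[-1] / diag[-1]
    for i in range(size - 2, -1, -1):  # back substitution
        g[i] = (rhs[i] + g[i + 1]) / diag[i]
    return g, n

g, n = green_function_1d(eta=0.5)
# decay rate: G(x+1)/G(x) approaches the root r < 1 of r^2 - (2+eta) r + 1 = 0
r = (2.5 - (0.5 ** 2 + 4 * 0.5) ** 0.5) / 2.0
assert abs(g[n + 10] / g[n + 9] - r) < 1e-6
```

The boundary truncation contributes only an error of order $r^{2n}$, which is negligible well inside the window.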
The proof of homogenization [7,4] proceeds by writing the minimizer $u_\varepsilon$ of Lemma 2.1 approximately in the form (2.10), where $\chi_k$ is the function (2.8) corresponding to the minimizer $\Psi_k$ of $G_k$ in Lemma 2.2. Clearly the parameter $\eta$ must be chosen appropriately, depending on $\varepsilon$. Now let $Z_\varepsilon(x, \omega)$ be the RHS of (2.10) minus the LHS, and let $\psi_\varepsilon$ be an arbitrary function in $H^1(Z^d_\varepsilon \times \Omega)$. Then from Lemma 2.1 we have (2.12). The first two terms on the RHS of the last equation can be rewritten as (2.13), and the first term in the last expression can be rewritten once more. Observe next that, since $\psi_\varepsilon \in H^1(Z^d_\varepsilon \times \Omega)$ and $u(x)$ can be assumed to converge exponentially to zero as $|x| \to \infty$, it follows that $\Phi_{i,j} \in L^2(\Omega)$. Defining $\Phi_j \in L^2(\Omega)$ in terms of the $\Phi_{i,j}$, we conclude from Lemma 2.2 that (2.14) is the same as (2.15). Hence the first term in (2.13) is the sum of (2.15) and (2.16). Now let us define $Q_{ij} \in L^2(\Omega)$, where $q_{ij}$ is given by (2.5). It follows from (2.6) that $\langle Q_{ij} \rangle = 0$. Furthermore, from (2.15), (2.16), we see that the first term in (2.13) is the same as (2.17). We can do a similar integration by parts for the second expression in (2.13); this yields (2.18). Once again it is clear that $\Phi_{i,j,k} \in L^2(\Omega)$. Integrating by parts we obtain functions $Q_{ijk}$, and evidently $\langle Q_{ijk} \rangle = 0$. Next we take account of the second term in (2.18); this produces functions $R_{ijk}$, and evidently $\langle R_{ijk} \rangle = 0$. Hence the second term in (2.13) is the same as (2.19). Now let us assume that $u(x)$ satisfies the partial difference equation (2.20). This equation converges as $\varepsilon \to 0$ to the homogenized equation (1.3). Note that it is a singular perturbation of (1.3) and therefore needs to be analyzed carefully. In particular, we shall have to show that $u(x)$ and its derivatives converge rapidly to zero as $|x| \to \infty$. It follows now from (2.12), (2.13), (2.17), (2.19) that we have: Proposition 2.1. Let $Z_\varepsilon$ be defined by (2.11). Then (2.21) holds, where $C$ is a constant and $A_1, \ldots, A_6$ are explicit error terms. Proof. Let $g = g(x, \omega)$ be a function in $L^2(Z^d_\varepsilon \times \Omega)$.
Denote by $G_g$ the functional (2.2) on $H^1(Z^d_\varepsilon \times \Omega)$ obtained by replacing the function $f(x)$ with $g(x, \omega)$ in (2.2). Then, according to Lemma 2.1, there is a minimizer of $G_g$ in $H^1(Z^d_\varepsilon \times \Omega)$ which we denote by $\psi_\varepsilon$. It is clear that $\|\psi_\varepsilon\|_{H^1} \le C \|g\|_2$ for some constant $C$, where $\|g\|_2$ is the $L^2$ norm of $g$. Furthermore, if we assume the solutions of (2.20) are rapidly decreasing, then it follows that $Z_\varepsilon$ is in $H^1(Z^d_\varepsilon \times \Omega)$. The Euler-Lagrange equation for $\psi_\varepsilon$, as given by Lemma 2.1, then tells us that the LHS of (2.21) is the same as a pairing with $Z_\varepsilon$. Next we consider the RHS of (2.21). If we use the Schwarz inequality on the fourth, fifth and sixth expressions, it is clear they are bounded by $C \|\psi_\varepsilon\|_{H^1} A_4^{1/2}$, $C \|\psi_\varepsilon\|_{H^1} A_5^{1/2}$, $C \|\psi_\varepsilon\|_{H^1} A_6^{1/2}$ respectively. To obtain bounds on the first three terms we need to represent $\psi_\varepsilon$ as an integral of its gradient. We can do this by using the Green's function $G_0$, which is the solution of (2.7) when $\eta = 0$. Using (2.7), the first term on the RHS of (2.21) is therefore the same as (2.23). It follows again from the Schwarz inequality that this last expression is bounded by $C \|g\|_2 A_1^{1/2}$. A similar argument shows that the second and third terms on the RHS of (2.21) are bounded by $C \|g\|_2 A_2^{1/2}$ and $C \|g\|_2 A_3^{1/2}$ respectively. The result follows now by taking $g = Z_\varepsilon$. Proposition 2.1 will help us obtain an estimate on the variance of the minimizer $u_\varepsilon$ of Lemma 2.1. In fact there is the inequality (2.24). We shall also want to estimate the variance of the random variable obtained by pairing $u_\varepsilon$ with a function $g : R^d \to R$, a $C^\infty$ function with compact support. To help us do this we reformulate the variational problem of Lemma 2.1.
Now, using the fact that $\langle \varphi_\varepsilon(x, \cdot) \rangle = 0$, we conclude the required bound from the Schwarz inequality. The Euler-Lagrange equation follows in the usual way. To verify that $u_\varepsilon(x, \cdot) = u(x) + \varepsilon \psi_\varepsilon(x, \tau_{x/\varepsilon}\, \cdot)$ is the minimizer of Lemma 2.1, we need only observe that, with this substitution, the Euler-Lagrange equation here is the same as that of Lemma 2.1.
If we let $\varepsilon \to 0$ in the expression (2.25) for $F_\varepsilon(u, \psi_\varepsilon)$ we formally obtain a functional $F_0(u, \psi)$. Consider now the problem of minimizing $F_0(u, \psi)$. If we fix $x \in R^d$ and minimize over all $\psi(x, \cdot) \in L^2(\Omega)$, then by Lemma 2.2 the minimizer is given by (2.26), where the $\Psi_k \in H(\Omega)$ are the minimizers of Lemma 2.2. If we further minimize with respect to the function $u(x)$, then it is clear that $u(x)$ is the solution of the PDE (1.3), where the matrix $q$ is given by (2.6). Hence in the minimization of $F_0$ we can separate the minimization problems in the $\omega$ and $x$ variables. The function $\psi$ defined by (2.26), however, no longer has the property that $\psi(x, \cdot) \in L^2(\Omega)$.
We wish now to define a new functional, closely related to $F_\varepsilon$ as $\varepsilon \to 0$, which has the property that the minimization problems in the $\omega$ and $x$ variables can be separated. It is clear that the formal limit of $F_{S,\varepsilon}$ as $\varepsilon \to 0$ is, like the formal limit of $F_\varepsilon$, given by $F_0$. The advantage of $F_{S,\varepsilon}$ over $F_\varepsilon$ is that the minimization problem separates. We shall prove this in the following lemmas. First we have the analogue of Lemma 2.4: the minimizer satisfies the Euler-Lagrange equation (2.28). Proof. Same as Lemma 2.4.
Next we need an analogue of Lemma 2.2. For $\varepsilon > 0$ define a functional $G_{k,\varepsilon} : L^2(\Omega) \to R$. Lemma 2.6. The functional $G_{k,\varepsilon} : L^2(\Omega) \to R$ has a unique minimizer $\psi_{k,\varepsilon} \in L^2(\Omega)$ which satisfies $\langle \psi_{k,\varepsilon}(\cdot) \rangle = 0$ and the Euler-Lagrange equation. Next, in analogy to (2.5), we define a matrix $q_\varepsilon$, where the $\psi_{k,\varepsilon}$, $k = 1, ..., d$, are the minimizers of Lemma 2.6. Evidently $q_\varepsilon$ is a symmetric positive definite matrix. Using the Euler-Lagrange equation we have a representation for $q^\varepsilon_{kk'}$ analogous to (2.6). Proof. Standard.
is the minimizer of Lemma 2.5.
Proof. Since $u(x)$ satisfies (2.30), it follows that $u(x)$ and its derivatives decrease exponentially as $|x| \to \infty$. Hence $u \in H^1(Z^d_\varepsilon)$. Since also $\psi_{k,\varepsilon} \in L^2(\Omega)$ and $\langle \psi_{k,\varepsilon} \rangle = 0$, the pair $(u, \psi_\varepsilon)$ is admissible. To show that $(u, \psi_\varepsilon)$ is the minimizer for the functional of Lemma 2.5, it will be sufficient to show that $(u, \psi_\varepsilon)$ satisfies the Euler-Lagrange equation (2.28). Using the Euler-Lagrange equation of Lemma 2.6 we can rewrite the LHS of (2.28). If we now use (2.29), we see this last expression can be rewritten once more; in view of (2.30) this last expression is zero.

Analysis in Fourier Space.
In this section we apply Fourier space methods to the problems of Section 2. First we shall be interested in finding a solution to the partial difference equation (2.20), where $f : R^d \to R$ is a $C^\infty$ function with compact support. If we let $\varepsilon \to 0$ in (2.20), then we formally obtain the problem (3.1). Evidently this is a singular perturbation of the homogenized equation (1.3). It is easy to see that this equation has a solution $u(x)$, all of whose derivatives decrease exponentially fast to zero as $|x| \to \infty$. Furthermore this decrease is uniform in $\varepsilon$ as $\varepsilon \to 0$. Now let us write equation (2.20) as (3.2), where the operator $L_\varepsilon$ is given by the LHS of (2.20). In contrast to the case of (3.1), we cannot assert that (3.2) has a solution in general. We can, however, assert that it has an approximate solution.
and A α is independent of ε.
where γ > 0 can be chosen arbitrarily large and the constant C is independent of ε.
Proof. We go into Fourier variables. For $u : Z^d_\varepsilon \to C$, the Fourier transform of $u$ is given by
$$\hat u(\xi) = \varepsilon^d \sum_{x \in Z^d_\varepsilon} u(x)\, e^{-i x \cdot \xi}, \qquad \xi \in [-\pi/\varepsilon, \pi/\varepsilon]^d.$$
The function $u$ can be obtained from its Fourier transform by the formula
$$u(x) = \frac{1}{(2\pi)^d} \int_{[-\pi/\varepsilon,\, \pi/\varepsilon]^d} \hat u(\xi)\, e^{i x \cdot \xi}\, d\xi.$$

The equation (3.2) is given in Fourier variables by
We can rewrite this as (3.4). Since the matrix $q$ is positive definite, it is clear that provided $\varepsilon |\xi| \ll 1$ the coefficient of $\hat u(\xi)$ in the last expression is non-zero. On the other hand, if $\varepsilon |\xi| = O(1)$ then this coefficient could become zero. To get around this problem we define a function $u(x)$ as follows. In this last expression $N$ is a positive integer and $A$ a positive constant chosen large enough. Since $f$ is $C^\infty$ with compact support, it follows that $\hat f$ is analytic. In particular, for any positive integer $m$ and $\delta > 0$ there exists a constant $C_{m,\delta}$, independent of $\varepsilon$, such that (3.6) holds. We have therefore a formula for $u(x)$. Now if $A$ is large and $\delta > 0$ is taken sufficiently small, it is easy to see that the modulus of the coefficient of $\hat u(\xi + i\eta)$ on the LHS of (3.4) is strictly positive for $\xi \in [-\pi/\varepsilon, \pi/\varepsilon]^d$, $|\eta| < \delta$. It follows then from (3.5), (3.6) that part (a) of Proposition 3.1 holds.
In view of (3.5) it follows that there is a constant C, independent of ε, such that Part (b) follows from this last inequality.
Next we wish to rewrite the functional $F_\varepsilon$ of Lemma 2.4 in Fourier variables. First observe that we may define the space $\hat H^1([-\pi/\varepsilon, \pi/\varepsilon]^d)$ as a set of functions $\hat u$ with finite norm; similarly we can define a space $\hat H^1([-\pi/\varepsilon, \pi/\varepsilon]^d \times \Omega)$. These spaces are unitarily equivalent to $H^1(Z^d_\varepsilon)$ and $H^1(Z^d_\varepsilon \times \Omega)$ via the Fourier transform. It is also clear that the functional $F_\varepsilon$ defined by (2.25) has a corresponding Fourier representation. We now follow the development of Lemma 2.4 through Proposition 2.2, but in Fourier space variables. First we have:
Observe that the functional G ξ,k,ε at ξ = 0 is identical to the functional G k,ε of lemma 2.6. The following lemma corresponds to lemma 2.6.
Making the substitution (3.11) for $\hat\psi_\varepsilon$ in (3.9), we see that (3.9) is the same as (3.12). We can obtain an alternative expression for $q^\varepsilon_{kk'}(\xi)$ by using the Euler-Lagrange equation (3.10). It is clear from this last expression that the matrix $q^\varepsilon(\xi) = [q^\varepsilon_{kk'}(\xi)]$ is Hermitian and non-negative definite. In view of the fact that $\langle \hat\psi_{k,\varepsilon}(\xi, \cdot) \rangle = 0$, it follows that it is bounded below by the matrix $\lambda I_d$, where $\lambda$ is given by (1.1). Hence the equation (3.12) can be solved uniquely for $\hat u(\xi)$ in terms of $\hat f(\xi)$.
Suppose now that we know that the minimizer $\hat\psi_{k,\varepsilon}(\xi, \cdot)$ of Lemma 3.2 is continuous as a function of $\xi$. Hence if we define $\hat u(\xi)$ by (3.12), then it is easy to see that $\hat u(\xi)$ is continuous. Since we are assuming $f$ is $C^\infty$ of compact support, it follows that $\hat f$ is rapidly decreasing. We conclude that $\hat u(\xi)$, $\hat\psi_\varepsilon(\xi, \cdot)$ defined by (3.12) and (3.11) are the unique solution to the variational problem of Lemma 3.1.
The operator $T_{k,\varepsilon,\xi}$ is defined by putting $\psi = T_{k,\varepsilon,\xi}\, \varphi$. Let $\|\cdot\|$ denote the usual $L^2$ norm and $\|\cdot\|_{\varepsilon,\xi}$ the norm associated to $\varepsilon, \xi$. It is easy to see that $T_{k,\varepsilon,\xi}$ is a bounded operator from $L^2(\Omega)$, equipped with the standard norm, to $L^2_0(\Omega)$, equipped with the norm $\|\cdot\|_{\varepsilon,\xi}$, and that the operator norm of $T_{k,\varepsilon,\xi}$ satisfies $\|T_{k,\varepsilon,\xi}\| \le 1$.
We can rewrite the Euler-Lagrange equation (3.10) using the operators $T_{k,\varepsilon,\xi}$. First let $b(\cdot)$ be the random matrix defined in terms of $a(\cdot)$; substituting for $a(\cdot)$ in terms of $b(\cdot)$ into (3.10) then yields an equation for $\hat\psi_{k,\varepsilon}$. We define an operator $T_{b,\varepsilon,\xi}$ on $L^2_0(\Omega)$, where $b(\cdot)$ is an arbitrary random real symmetric matrix. We define $\|b\|$ to be the maximum over $\omega \in \Omega$ of the spectral radii of $b(\omega)$. It is easy to see now that $T_{b,\varepsilon,\xi}$ is a bounded operator on $L^2_0(\Omega)$ equipped with the norm $\|\cdot\|_{\varepsilon,\xi}$, and that the corresponding operator norm satisfies $\|T_{b,\varepsilon,\xi}\| \le \|b\|$.
Our goal now is to show that the operators T k,ε,ξ and T b,ε,ξ can be analytically continued from ξ ∈ R d to a strip {ξ + iη : ξ, η ∈ R d , |η| < α} in C d . Furthermore, the norm bounds we have obtained continue to approximately hold. Lemma 3.3. (a) Assume L 2 (Ω) is equipped with the standard norm and let B(L 2 (Ω)) be the corresponding Banach space of bounded operators on L 2 (Ω). Then there exists α > 0, independent of ε, such that the mapping ξ → T k,ε,ξ from R d to B(L 2 (Ω)) can be analytically continued into the region {ξ + iη ∈ C d : ξ, η ∈ R d , |η| < α}. (b) For ξ, η ∈ R d , |η| < α consider T k,ε,ξ+iη as a bounded operator from L 2 (Ω), equipped with the standard norm, to L 2 0 (Ω), equipped with the norm ε,ξ . We denote the corresponding operator norm also by ε,ξ . Then, for α sufficiently small, independent of ε, there exists a constant C α , depending only on α, such that Proof. We can write down the solution of (3.15) by using a Green's function. Thus, in analogy to (2.7), let G ε,ξ be the solution of the equation, Then T k,ε,ξ is given by The function G ε,ξ (y) decays exponentially as |y| → ∞. Hence the RHS of (3.19) is in L 2 0 (Ω). It is a simple matter to verify that ψ = T k,ε,ξ φ, defined by (3.19), satisfies (3.15).
To prove part (a) we need to analyze the solution of (3.18). To do this we go into Fourier variables. Taking the inverse Fourier transform, we reconstruct $G_{\varepsilon,\xi}$ from $\hat G_{\varepsilon,\xi}$ by the formula (3.21). Observe now that there exists $\alpha > 0$, independent of $\varepsilon$, such that $\hat G_{\varepsilon,\xi}(\zeta)$, regarded as a function of $\xi$ and $\zeta$, is analytic in the region $\{(\xi, \zeta) \in C^{2d} : |\mathrm{Im}\,\xi| < \alpha,\ |\mathrm{Im}\,\zeta| < \varepsilon\alpha\}$. By deforming the contour of integration in $\zeta$ in (3.21) into the complex space $C^d$, we see from (3.20), (3.21) that for every $x \in Z^d$ the function $[e^{i\varepsilon e_k \cdot \xi}\nabla_k + e^{i\varepsilon e_k \cdot \xi} - 1]\, G_{\varepsilon,\xi}(x)$ is analytic in $\xi$ for $\xi \in C^d$ with $|\mathrm{Im}\,\xi| < \alpha$. Furthermore there is a universal constant $C$ such that the bound (3.22) holds. Note that a similar inequality for $G_{\varepsilon,\xi}(x)$ itself holds if $d \ge 3$ but not if $d = 2$. Part (a) follows now, since it is clear that the RHS of (3.19) is analytic if we have a finite summation instead of the sum over all of $Z^d$. The inequality (3.22) gives uniform convergence in the norm of $B(L^2(\Omega))$, whence the result of part (a).
We turn to the proof of part (b). We have from (3.19) a representation in which $\Gamma$ is the correlation function and the $h$, $h_j$ are explicit kernels. Now $\Gamma(n)$ is a positive definite function. Hence by Bochner's theorem [9] there is a finite positive measure $d\mu_\varphi$ on $[-\pi, \pi]^d$ whose Fourier coefficients are the $\Gamma(n)$. From (3.20), (3.23) it follows that we can choose $\alpha$ sufficiently small, independent of $\varepsilon$, such that the last expression is less than $C|\eta|^2$ for all $\zeta \in [-\pi, \pi]^d$, $|\eta| < \alpha$, where the constant $C$ is independent of $\varepsilon$. The result follows.
where the matrix $b$ has $\|b\| \le 1 - \lambda/\Lambda$. In view of Lemma 3.3 and Corollary 3.1, there exists $\alpha > 0$, independent of $\varepsilon$, such that the equation (3.24) has a unique solution $\hat\psi_{k,\varepsilon}(\xi + i\eta, \cdot) \in L^2_0(\Omega)$, provided $\xi, \eta \in R^d$, $|\eta| < \alpha$. Further, $\alpha$ can be chosen sufficiently small, independent of $\varepsilon$, such that (3.25) holds, where the constant $C_\alpha$ is independent of $\varepsilon$. It is easy to see that the function $\hat\psi_{k,\varepsilon}(\xi + i\eta, \cdot)$ is the analytic continuation of $\hat\psi_{k,\varepsilon}(\xi, \cdot)$, $\xi \in R^d$. In fact we just write the solution of (3.24) as a perturbation series. A finite truncation of the series is clearly analytic in $\xi + i\eta \in C^d$. Now we use the fact that Lemma 3.3 and Corollary 3.1 give us uniform convergence in the standard norm on $L^2_0(\Omega)$ to assert the analyticity of the entire series. This proves parts (a) and (b).
To prove part (c) we use the representation (3.13) for q ε (ξ). Thus Hence from the Schwarz inequality, where the constant C depends only on α and the uniform bound Λ on the matrix a(·). The result follows now from (3.25).
Proof of Theorem 1.1. From Proposition 3.2 there exists $\alpha > 0$ such that the matrix $q_\varepsilon(\xi)$ has an analytic continuation into the region $\{\xi + i\eta : |\eta| < \alpha\}$. From (3.12) we have then that $\hat u(\xi)$ can also be analytically continued into this region. The result follows now by using the deformation of contour argument of Proposition 3.1, the fact that $q_\varepsilon(\xi)$ is bounded below as a quadratic form by $\lambda I_d$, and part (c) of Proposition 3.2.

A Bernoulli Environment.
In this section we consider a situation in which the random matrix $a : \Omega \to R^{d(d+1)/2}$ is generated by independent Bernoulli variables. For each $n \in Z^d$ let $Y_n$ be independent Bernoulli variables, whence $Y_n = \pm 1$ with equal probability. The probability space $(\Omega, \mathcal{F}, \mu)$ is then the space generated by all the variables $Y_n$, $n \in Z^d$. A point $\omega \in \Omega$ is a set of configurations $\{(Y_n, n) : n \in Z^d\}$. For $y \in Z^d$ the translation operator $\tau_y$ acts on $\Omega$ by taking the point $\{(Y_n, n) : n \in Z^d\}$ to $\{(Y_{n+y}, n) : n \in Z^d\}$. The random matrix $a$ is then defined by $a(\omega) \stackrel{\mathrm{def}}{=} (1 + \gamma Y_0(\omega))\, I_d$, as in (1.6).
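The Bernoulli environment is straightforward to simulate. The following sketch is our own illustration, assuming the two-valued form $a(\omega) = (1 + \gamma Y_0(\omega)) I_d$ of (1.6); it checks the uniform ellipticity bounds $\lambda = 1 - \gamma$, $\Lambda = 1 + \gamma$ for this model:

```python
import random

GAMMA = 0.5  # must satisfy |gamma| < 1 so that a stays positive definite

def sample_environment(sites, rng):
    # i.i.d. Bernoulli variables Y_n = +/-1 with equal probability
    return {n: rng.choice([-1, 1]) for n in sites}

def a_scalar(y_value, gamma=GAMMA):
    # the common diagonal entry of a = (1 + gamma*Y) I_d
    return 1.0 + gamma * y_value

rng = random.Random(1)
sites = [(i, j) for i in range(-3, 4) for j in range(-3, 4)]
env = sample_environment(sites, rng)
# uniform ellipticity: 1 - gamma <= a <= 1 + gamma at every site
assert all(1.0 - GAMMA <= a_scalar(env[n]) <= 1.0 + GAMMA for n in sites)
```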
For each $y \in Z^d$ we may define a translation operator $\tau_y$ on $Z^{d,N}$. We can then define the convolution of two functions $\psi_N, \varphi_N : Z^{d,N} \to R$. This is a function $\psi_N * \varphi_N : Z^d \to R$, defined by (4.1). If $N = 1$ this is just the standard discrete convolution. We have the following generalization of Young's inequality: Proposition 4.1. Suppose $p, q$ satisfy $1 \le p, q \le \infty$ and $1/p + 1/q = 1 + 1/r$. Then
$$\|\psi_N * \varphi_N\|_r \le \|\psi_N\|_p\, \|\varphi_N\|_q. \tag{4.2}$$
Proof. We follow the standard procedure. If $r = \infty$ the result follows from Hölder's inequality applied to (4.1). For $p = 1$ we have a pointwise bound from Hölder's inequality; if we sum this last inequality over $y \in Z^d$ we get the inequality (4.2) with $r = q$. For the general case we split the summand into factors, where $\alpha, \beta$ are given by $r\alpha = p$, $r\beta = q$. The result follows from this last inequality by summing over $y \in Z^d$.
Arguing as in Proposition 4.1 we have a version of Young's inequality for this situation.
The point in defining the Fock spaces $F_p(Z^d)$ here is the fact that $F_2(Z^d)$ is unitarily equivalent to $L^2(\Omega)$. In fact if $\psi = \{\psi_N : N = 0, 1, 2, ...\}$ with $\psi_0 \in R$ and $\psi_N : Z^{d,N} \to R$, we can define a function $U\psi$ on $\Omega$ by expanding in the Walsh basis generated by the $Y_n$. Proposition 4.2. Suppose the random matrix $a(\omega)$ is given by (1.6) and $\Psi^k$ is the minimizer for the corresponding variational problem as given in Lemma 2.2. Then there exists $p$, $1 < p < 2$, depending on $\gamma$, such that $\Psi^k_i \in L^p(\Omega)$, $i = 1, ..., d$. The number $p$ can be chosen arbitrarily close to 1 provided $\gamma > 0$ is taken sufficiently small.
Proof. Writing Ψ i = ∂ i Φ, i = 1, ..., d and assuming Φ is an arbitrary function in L 2 (Ω) it is clear that the Euler-Lagrange equation in Lemma 2.2 is the same as Thus if we can find Ψ k ∈ H(Ω) satisfying (4.4) then Ψ k is the unique solution to the variational problem of Lemma 2.2.
For any k = 1, . . . , d we define an operator T k : L 2 (Ω) → H(Ω) as follows: Suppose Φ ∈ L 2 (Ω). Then, in analogy to our derivation of (4.4) we see that there is a unique Ψ ∈ H(Ω) such that We put Ψ = T k Φ. It is easy to see that T k is a bounded operator with T k ≤ 1. Next, for k = 1, ..., d and η > 0 we define an operator T k,η : L 2 (Ω) → H(Ω) as follows: Suppose Φ ∈ L 2 (Ω). Then by using the variational argument of Lemma 2.2 one sees that there is a unique Φ η ∈ L 2 (Ω) such that We put ∇Φ η = T k,η Φ. It is again clear that T k,η is a bounded operator and T k,η ≤ 1.
We can obtain a representation for the solution Φ η of (4.6) with the help of the Green's function G η of (2.7). It is easy to see that is in L 2 (Ω) and satisfies (4.6). From (4.5), (4.6), we see that for Φ ∈ L 2 (Ω), T k,η Φ converges weakly to T k Φ in H(Ω) as η → 0 provided the corresponding function Φ η defined by (4.7) satisfies ηΦ η → 0 weakly in L 2 (Ω). This last fact follows from the ergodicity of the translation operators τ y . One sees this by going to the spectral representation of the τ y [9].
We consider now the case of the a(ω) in the statement of Proposition 4.2. In this situation (4.4) can be rewritten as T j (a kj ) = 0.

Define T : H(Ω) → H(Ω) by the formula,
Since the operation of multiplication by $Y_0$ is unitary on $L^2(\Omega)$, it follows that $T$ is bounded and in fact $\|T\| \le 1$. Hence (4.4) is equivalent to solving the equation $\sum_{j=1}^d T_j(a_{kj}(\cdot)) = 0$.
Since $\|T\| \le 1$, it is evident this equation has a unique solution provided $|\gamma| < 1$. We can for $\eta > 0$ define an operator $T_\eta : H(\Omega) \to H(\Omega)$ in analogy to the definition of $T$. It is clear that $T_\eta$ is bounded with $\|T_\eta\| \le 1$. Hence if $|\gamma| < 1$ there is a unique solution to the equation (4.9). Furthermore, the solution $\Psi_{k,\eta}$ of (4.9) converges weakly in $H(\Omega)$ to the solution $\Psi_k$ of (4.4) as $\eta \to 0$.
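The solvability claim is the usual Neumann series argument. Writing the equation schematically as $\psi + \gamma T \psi = g$ with $\|T\| \le 1$ (our schematic form, not the paper's notation), one has

```latex
\psi \;=\; \sum_{n=0}^{\infty} (-\gamma)^n T^n g,
\qquad
\|\psi\| \;\le\; \sum_{n=0}^{\infty} |\gamma|^n \|T\|^n \|g\| \;\le\; \frac{\|g\|}{1 - |\gamma|},
```

so the series converges in $H(\Omega)$ precisely because $|\gamma| < 1$.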
For $1 \le p < \infty$ we define the spaces $H_p(\Omega)$ as follows. Let $E_p = \{\nabla \varphi : \varphi \in L^p(\Omega)\}$. For $\Psi \in E_p$ we define the norm $\|\Psi\|_p$ of $\Psi$. The Banach space $H_p(\Omega)$ is the closure of $E_p$ in this norm. Evidently $H_2(\Omega)$ is the same as $H(\Omega)$. We can show that there exists $p < 2$, depending on $\gamma < 1$, such that (4.9) has a solution in $H_p(\Omega)$. To see this, observe that the RHS of the relevant equation is just a singular integral. In fact, if $\varphi = \{\varphi_N : N = 0, 1, 2, ...\}$ is in $L^p(\Omega)$, then the corresponding function $\psi$ is given by $\psi = \{\psi_N : N = 0, 1, 2, ...\}$, where $\psi_0 = 0$ and, for $N \ge 1$, $\psi_N$ is the convolution of a second derivative of a Green's function with $\varphi_N$. We can therefore invoke the Calderon-Zygmund theorem [10] to conclude the following: for $1 < p < \infty$ the operator defined by this singular integral is bounded on $L^p$.
In the last statement we are using the fact that the norm of $T$ on $H_2(\Omega)$ is at most 1, together with the continuity of the operator norms in $p$. Hence there exists $p < 2$ such that, for all sufficiently small $\eta$, the equation (4.9) has a unique solution $\Psi_{k,\eta}$ in $H_p(\Omega)$. It also follows from the above that as $\eta \to 0$, $\Psi_{k,\eta}$ converges in $H_p(\Omega)$ to a function $\Psi_k \in H_p(\Omega)$. Since $H_p(\Omega) \subset H_2(\Omega) = H(\Omega)$, it follows that this $\Psi_k$ is also the solution of the variational problem. This proves the first part of Proposition 4.2. The fact that $p$ can be taken arbitrarily close to 1 for sufficiently small $\gamma$ is clearly also a direct consequence of the Calderon-Zygmund theorem.
where α > 0 is a constant depending only on γ and C is independent of ε. If γ is sufficiently small one can take α = 1.

Proof. From Proposition 2.1 it is sufficient to obtain estimates on
We first consider $A_1$. In view of the boundedness of the matrix $a(\omega)$ and Proposition 4.2, it follows that $Q_{ij}(\omega)$ is in $L^p(\Omega)$ for some $p$, $1 < p < 2$, $i, j = 1, ..., d$. Taking the expectation value and using translation invariance, we see that $A_1$ is given by an expression involving the convolution defined by (4.3). Making the change of variable $n = y - y'$ and estimating the summation with respect to $x$ from our knowledge of the properties of $G_0$, we conclude that there is a bound with constants $C, \delta > 0$ independent of $\varepsilon$. In this last inequality we have used Proposition 3.1. From Corollary 4.1 it follows that $Q_{ij} * Q_{ij} \in L^r(Z^d)$, where $1/r = 2/p - 1$. Since $1 < p < 2$ one has $1 < r < \infty$. Hence from Hölder's inequality we conclude a bound with exponents satisfying $1/r + 1/r' = 1$. Since $r < \infty$, it follows that $A_1 \le C\varepsilon^{2\alpha}$ for some $\alpha > 0$. If $\gamma$ is sufficiently small we can choose $p$ close to 1 and hence $r$ close to 1. Since $d > 2$, it follows that $A_1 \le C\varepsilon^2$ in this case.
It is clear that A 2 and A 3 can be dealt with in exactly the same way as A 1 . To deal with A 4 we choose η = ε 2 . Arguing as before we have Again we see from known properties of Green's functions that Using these inequalities and arguing as before we see that A 4 ≤ Cε 2α with α > 0 and α = 1 if γ is sufficiently small.
To deal with A 5 and A 6 we use the representation, and argue as previously.
For d = 2 we can get almost the same result as in Proposition 4.3. We have Proof. We consider A 1 again. The problem is that when d = 2 the summation with respect to x in (4.11) diverges. To get around this we replace G 0 in the representation (2.23) of the first term on the RHS of (2.21) by G η with η = ε 2 . Hence from (2.7), (2.22), we have We can rewrite this as The first term on the RHS of (2.21) is therefore the same as If we use the Schwarz inequality in (4.12) we see the first term is bounded by C‖ψ ε ‖ H 1 A We proceed now as in Proposition 4.3. Thus A 1,0 can be written as It is clear now that there are constants C, δ > 0 such that Arguing as before we see that A 1,0 ≤ Cε 2α for some α > 0, and α can be chosen arbitrarily close to 1 if γ is small. We have a similar representation for A 1,1 given by where in this case h ij is given by The estimate now on h ij is We conclude again that A 1,1 ≤ Cε 2α for some α > 0, and α can be chosen arbitrarily close to 1 if γ is small.
The second and third terms on the RHS of (2.21) can be dealt with exactly the same way we dealt with the first term. The fourth, fifth and sixth terms are handled in the same way we handled A 4 , A 5 , A 6 in Proposition 4.3.

Proof of Theorem 1.2. In view of Propositions 4.3 and 4.4 and the inequality (2.24), it is sufficient for us to estimate
Since this quantity is bounded by dA 6 . We shall first prove a result when ψ ε is the minimizer for the separable problem given in Lemma 2.5.
for some α > 0, provided |γ| < 1. The number α can be taken arbitrarily close to d provided γ is taken sufficiently small.
Proof. From Proposition 2.2 it is sufficient for us to bound where ψ k,ε (·) is the minimizer of Lemma 2.6. This last expression is the same as We shall see in the next lemma that εψ k,ε ∈ L p (Ω) for some p, 1 < p < 2, and that ‖εψ k,ε ‖ p ≤ C where C is independent of ε as ε → 0. Further, p can be taken arbitrarily close to 1 for sufficiently small γ > 0. The result follows easily from this and Young's inequality (Corollary 4.1). Since εψ k,ε is in L p (Ω) it follows that ε 2 ψ k,ε * ψ k,ε is in L r (Z d ) with 1/r = 2/p − 1, and ‖ε 2 ψ k,ε * ψ k,ε ‖ r ≤ C where C is independent of ε. Hence from Hölder's inequality the LHS of (4.13) is bounded by Cε d ‖h‖ r′ , where 1/r + 1/r′ = 1.
It is easy now to see from (4.14) that ‖h‖ r′ ≤ Cε −d/r′ . Note that r > 1 since p < 2, hence r′ < ∞.
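Combining the two bounds above gives the exponent claimed in the proposition; explicitly, using only the relations stated in the text,

```latex
\varepsilon^{d}\, \| h \|_{r'}
\;\le\; C\, \varepsilon^{\,d - d/r'}
\;=\; C\, \varepsilon^{\,d/r},
\qquad \frac{1}{r} + \frac{1}{r'} \;=\; 1 .
```

Hence the LHS of (4.13) is O(ε d/r ). As γ → 0 one may take p → 1, so that 1/r = 2/p − 1 → 1 and d/r → d, which is the statement that α can be taken arbitrarily close to d.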

Lemma 4.1. Suppose the random matrix a(ω) is given by a(ω)
and ψ k,ε (·) is the minimizer for the corresponding variational problem as given in Lemma 2.6. Then there exists p, 1 < p < 2, depending on γ, such that εψ k,ε ∈ L p (Ω) with ‖εψ k,ε ‖ p bounded independently of ε as ε → 0. The number p can be chosen arbitrarily close to 1 provided γ > 0 is taken sufficiently small.
Proof. Observe that the result is trivial for p = 2. In fact we have To prove the L p result with p < 2 we proceed similarly to Proposition 4.2. First note that the Euler-Lagrange equation of Lemma 2.6 is the same as If we can find ψ k,ε ∈ L 2 (Ω) satisfying (4.15) then ψ k,ε is the unique solution to the variational problem of Lemma 2.6.
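The trivial p = 2 bound is the usual energy estimate. A sketch, assuming the variational problem of Lemma 2.6 has the standard massive-corrector form with forcing direction e k (this form is an assumption for illustration, not reproduced from the text): testing the Euler-Lagrange equation (4.15) against ψ k,ε itself and using the uniform ellipticity λI d ≤ a(ω) ≤ ΛI d gives

```latex
\lambda \,\big\langle |\nabla^{\varepsilon}\psi_{k,\varepsilon}|^{2} \big\rangle
 + \varepsilon^{2} \big\langle \psi_{k,\varepsilon}^{2} \big\rangle
\;\le\; \Lambda \,\big\langle |\nabla^{\varepsilon}\psi_{k,\varepsilon}| \big\rangle
\;\le\; \tfrac{\lambda}{2}\big\langle |\nabla^{\varepsilon}\psi_{k,\varepsilon}|^{2}\big\rangle
 \;+\; \frac{\Lambda^{2}}{2\lambda},
```

whence ⟨(εψ k,ε ) 2 ⟩ ≤ Λ 2 /(2λ) uniformly in ε, i.e. ‖εψ k,ε ‖ 2 ≤ C.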

More General Environments.
In this section we shall show how to generalize the methods of §4 to prove Theorem 1.2.
and a random variable Y 0,s = b s − ⟨b s ⟩. For s ∈ S, n ∈ Z d we define Y n,s as the translate of Y 0,s . Thus Y n,s (·) = Y 0,s (τ n ·). It follows from our assumptions that the variables Y n 1 ,s 1 , Y n 2 ,s 2 are independent if n 1 ≠ n 2 . They are not necessarily independent if n 1 = n 2 . We can think of the extra index s on the variable Y n,s as denoting a spin. We are therefore led to define a Fock space F p S (Z d ) of many particle functions, where the particles move in Z d and have spin in S. Thus ψ ∈ F p S (Z d ) is a collection of N particle functions ψ = {ψ N : N = 0, 1, 2, ...}, where ψ 0 ∈ R and The norm on F p S (Z d ) is given as before by For s = ∪ m k=1 {i k , j k } ∈ S, let |s| = m. Given a parameter α > 0 we define a mapping U α from F 2 S (Z d ) to functions on Ω by Lemma 5.1. For any α > 0 the number γ can be chosen sufficiently small so that U α is a bounded operator from F 2 S (Z d ) to L 2 (Ω) with ‖U α ‖ ≤ 1.
Proof. Since ⟨Y 0,s ⟩ = 0, s ∈ S, we have The result follows now by taking δ small enough so that Σ s∈S δ 2|s| ≤ 1.
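For orientation, the elided many-particle norm is presumably the standard ℓ p Fock norm; one natural form (an assumption for orientation, not the paper's verbatim definition) is

```latex
\| \psi \|_{F^{p}_{S}}^{\,p}
\;=\; |\psi_{0}|^{p} \;+\;
\sum_{N=1}^{\infty} \;\sum_{s_{1},\dots,s_{N} \in S}\;
\sum_{n_{1},\dots,n_{N} \in \mathbb{Z}^{d}}
\big| \psi_{N}(n_{1},s_{1},\dots,n_{N},s_{N}) \big|^{p} ,
```

so that F 2 S (Z d ) is a Hilbert space, and the map U α of (5.1) pairs the N particle component ψ N against products of the variables Y n j ,s j , weighted by powers of α in the moduli |s j |.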
Our first goal here will be to prove an analogue of Proposition 4.2. Let T b : H(Ω) → H(Ω) be the operator defined by where the operators T i are given by (4.5). It is clear that ‖T b ‖ ≤ γ and that the minimizer Ψ k of Lemma 2.2 is the unique solution to the equation For each y ∈ Z d we can define a translation operator τ y on F 2 S (Z d ) as follows: It is clear that τ y U α = U α τ y , y ∈ Z d , α > 0. Just as in §2 we can use the translation operators to define derivative operators We define the space of gradients of functions in F 2 S (Z d ), which we denote by H 2 S (Z d ), in analogy to the definition of H(Ω). Thus for ϕ ∈ F 2 S (Z d ) let ∇ϕ be the gradient (∂ 1 ϕ, ..., ∂ d ϕ). Then H 2 S (Z d ) is the completion of the space of gradients {∇ϕ : ϕ ∈ F 2 S (Z d )} under the norm We wish to define an operator T b,α,F on H 2 S (Z d ) which has the property that if T b is the operator of (5.2) then These are defined exactly as in (4.5). Thus, note that for Φ ∈ F 2 We can see this by a variational argument as previously. We put then Ψ = T k,F Φ and it is easy to see that T k,F is bounded with ‖T k,F ‖ ≤ 1. It is also clear that U α T k,F = T k U α , where T k is defined by (4.5).
Next we need to define analogues of the multiplication operators b ij (·), 1 ≤ i, j ≤ d. For any pair (i, j) with 1 ≤ i, j ≤ d and α > 0 we define an operator B i,j,α on F 2 S (Z d ), acting on ϕ = {ϕ N } componentwise; the image ψ N (n 1 , s 1 , ..., n N , s N ) is built from ϕ N and ϕ N +1 (0, s, n 1 , s 1 , ..., n N , s N ), with cases according to whether n k = 0, 2 ≤ k ≤ N, and whether {i, j} is strictly contained in s 1 . We wish to obtain an equation in H 2 S (Z d ) which corresponds to the equation (5.3). It is easy to see that we have the following: Lemma 5.2. (a) The number γ can be chosen sufficiently small so that for some α > 1 the operator T b,α,F is a bounded operator on H 2 S (Z d ) with norm strictly less than 1.

(b) Suppose γ, α have been chosen so that part (a) holds and also Lemma 5.1. Then if Ψ k is the unique solution to (5.4) the function U α Ψ k is in H(Ω) and satisfies (5.3).
We can use the same method of proof as in Proposition 4.2 to prove the corresponding analogue for the solution of (5.4). Suppose ψ = {ψ N : N = 0, 1, 2, ...} ∈ F p S (Z d ) with ψ 0 = 0. Assume γ > 0 is chosen small enough so that U α ψ ∈ L 2 (Ω). Let g : Z d → R be the function g(n) = ⟨U α ψ(·)U α ψ(τ n ·)⟩, n ∈ Z d . Then γ > 0 can be chosen small enough so that g ∈ L r (Z d ), where 1/r = 2/p − 1.
Proof. Similarly to Lemma 5.1 we obtain a bound on g(n) in terms of the functions ψ N (n 1 , s 1 , ..., n N , s N ), in which δ > 0 can be taken arbitrarily small. We now use the method of Proposition 4.1 to finish the proof.
The previous three lemmas can be used to prove part of Theorem 1.2, namely the case when γ may be taken arbitrarily small. This is done by simply following through the corresponding proofs for the Bernoulli case given in §4. Next we wish to consider the case of Theorem 1.2 when γ is assumed only to be strictly less than 1. First we shall deal with the case where the variables b s (·), s ∈ S, are finitely generated. This means there exist variables Y k (·), k = 0, 1, ..., M. This was the situation in §4, where we could take M = 1.
We proceed now as before, taking our spin space S to be the set of integers {1, 2, ..., M }. Letting Y 0,s = Y s , s = 1, ..., M , we may define as before the spaces F p S (Z d ) and the transformation U 1 . It is clear that the following holds: Lemma 5.5. U 1 is a bounded operator from F 2 S (Z d ) to L 2 (Ω) with ‖U 1 ‖ ≤ 1.

Next we define an operator T F
where the operators T i,F are defined as before. It is easy to see that T F is bounded, and the function U 1 Ψ k ∈ H(Ω) is the unique solution to (5.3). It is clear from Lemma 5.6 that the following holds: Lemma 5.7. Suppose Ψ k is the solution of (5.6). Then there exists p, 1 < p < 2, depending only on γ < 1, such that Ψ k i ∈ F p S (Z d ), 1 ≤ i ≤ d. Theorem 1.2, in the case when γ is close to 1, follows from Lemma 5.7 just as before. To complete the proof of Theorem 1.2 in this case we need to deal with the situation where the variables b i,j (·) are not finitely generated. To do this let V 0 be the subspace of L 2 (Ω) generated by the constant function. For k ≥ 1 we define the linear space V k inductively as the span of V k−1 and the products of V k−1 with the variables b i,j (·). Let Y k , k = 0, 1, 2, ..., be an orthonormal set of variables in L 2 (Ω) with the property that Y 0 ≡ 1 and V k is spanned by the variables Y k′ , 0 ≤ k′ ≤ k, k = 0, 1, 2, ... . For k = 0, 1, 2, ... we write Let S be the set of integers {1, 2, ...} and F p S (Z d ) be the corresponding Fock space defined as before. For s ∈ S let the modulus of s be |s| = s. Then for any α > 0 we can define the mapping U α from F 2 S (Z d ) to functions on Ω by (5.1). Let B α be the operator on F 2 S (Z d ) with the property that Then B α is given as follows: For ϕ = {ϕ N : N = 0, 1, 2, ...} we put B α ϕ = ψ = {ψ N : N = 0, 1, 2, ...}, where ψ is defined in terms of ϕ by, Proof. For k = 0, 1, 2, ... let λ k be real parameters. Then it follows from (5.7) that Observe now from (5.8) that if n j ≠ 0, 1 ≤ j ≤ N , then We can use (5.9) to obtain a bound on the RHS of (5.10) when p = 2. To do this let g α (k) be defined for k = 0, 1, 2, ...
by If we use Hölder's inequality this last inequality implies that for any p, 1 < p < 2, one has If we sum this last inequality with respect to r and average over n, 0 ≤ n ≤ N , we obtain the inequality, This last inequality together with (5.12) imply that we can choose α, p 0 , 0 < α < 1, such that the RHS of (5.10) is bounded by To see this we need to see how to choose the constants α, δ, M, N, p 0 . Evidently we can take α = (1 + √γ)/2. Next we pick δ small enough so that α −1 (1 + δ) 1/2 < 2/(1 + γ). Then we choose M to be large enough so that C α,γ,δ,M < 4γ 2 /(1 + γ) 2 . Finally we choose N, p 0 so that for p 0 ≤ p ≤ 2 one has We conclude from (5.10) that for this choice of α, p 0 , if p 0 ≤ p ≤ 2, then |B α ϕ N (n 1 , k 1 , ..., n N , k N )| is bounded in terms of ϕ N (n 1 , k 1 , ..., n N , k N ) and ϕ N +1 (0, k, n 1 , k 1 , ..., n N , k N ) when n j ≠ 0, 1 ≤ j ≤ N . The result follows now by summing the last inequality with respect to the n j , 1 ≤ j ≤ N , and N . Next we extend the previous method to the case where the random matrix b(·) is assumed to be a diagonal matrix. We cannot now compute the dimension of the linear spaces V k defined after Lemma 5.7. We can however estimate their dimension. It is easy to see that the dimension of V k is bounded above by (k + 1) d . Let Y k,j , 0 ≤ j ≤ J k , k = 0, 1, 2, ..., be an orthonormal set of variables in L 2 (Ω) with the property that Y 0,0 ≡ 1 and V k is spanned by the variables Y k′,j , 0 ≤ j ≤ J k′ , 0 ≤ k′ ≤ k, k = 0, 1, 2, ... . For a variable Y k,j let us denote by s = (k, j) the spin of that variable, with modulus |s| = k. It follows from the definition of the spaces V k that then, We associate a modulus |s| with each s ∈ S M . If s ∈ S ∩ S M then the modulus of s is as in the previous paragraph. Otherwise if s = (r 1 , ..., r d , s′) then |s| = r 1 + ... + r d + |s′| = r 1 + ... + r d + M > M . We can also associate a variable to each s ∈ S M . For s ∈ S ∩ S M we put Y s,M = Y s .
If s = (r 1 , ..., r d , s′) we put In analogy to before we define for n ∈ Z d , s ∈ S M , variables Y n,s,M by Y n,s,M (·) = Y s,M (τ n ·). We may also define the Fock space F 2 S M (Z d ) and a mapping U α,M corresponding to (5.1). Thus for ψ ∈ F 2 S M (Z d ) one has, Proof. Just as in Lemma 5.1 we have a bound in terms of the functions ψ N (n 1 , s 1 , ..., n N , s N ). If we use the Schwarz inequality on the RHS of (5.14) we have We consider now the sum, If |s| > M this sum is bounded by It is clear from these last two inequalities that we may choose M , depending only on α, such that (5.15) is bounded above by 1 for all s ∈ S M . The result follows now since ⟨Y 2 n j ,s j ,M ⟩ ≤ 1, 1 ≤ j ≤ N, N = 1, 2, ... .
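As a sanity check on the parameter choices made in the proof of Lemma 5.8 above (using only the relations stated there), note that with α = (1 + √γ)/2 and 0 < γ < 1 one has

```latex
\alpha^{-1} \;=\; \frac{2}{1 + \sqrt{\gamma}} \;<\; \frac{2}{1 + \gamma},
\qquad \text{since } \sqrt{\gamma} > \gamma \ \text{ for } 0 < \gamma < 1,
```

so there is indeed room to choose δ > 0 with α −1 (1 + δ) 1/2 < 2/(1 + γ), after which M, and then N, p 0 , can be chosen in the stated order.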
• Suppose |s 1 | = 1, n j = 0, 2 ≤ j ≤ N . Then Proof. We have just as in Lemma 5.8 that if λ s , 0 ≤ |s| < M, are parameters then Arguing as in Lemma 5.8 we conclude from the above inequality that We fix points {n 1 , ..., n N } ∈ Z d,N with n j ≠ 0, 1 ≤ j ≤ N , and s j ∈ S M , 1 ≤ j ≤ N . We then define parameters λ s , s ∈ S M , and λ 0 by λ 0 = ϕ N (n 1 , s 1 , . . . , n N , s N ), λ s = ϕ N +1 (0, s, n 1 , s 1 , . . . , n N , s N ), s ∈ S M . Then the N particle component of the image at (n 1 , s 1 , . . . , n N , s N ) is expressed in terms of ϕ N (n 1 , s 1 , . . . , n N , s N ) and ϕ N +1 (0, s, n 1 , s 1 , . . . , n N , s N ). From the Schwarz inequality the RHS of the last equation is bounded above by for any δ > 0. In view of (5.17) the sum of the first three terms in the last expression is bounded above by The fourth term is bounded above by The final term in the expression is bounded above by It is also clear from (5.16) that The result follows from the last set of inequalities by first picking α, δ, 0 < α < 1, δ > 0, such that (1 + δ) 1/2 α −1 γ < 2γ/(1 + γ). Then M 0 can be chosen large enough, depending on δ, α, so that the sum (5.18) is bounded above by We may easily deduce from the proof of Lemma 5.10 the analogue of Lemma 5.8. Lemma 5.11. There exists α, M 0 , p 0 , 0 < α < 1, M 0 a positive integer, If we follow the development after Lemma 5.5 we can deduce from Lemma 5.11 the analogue of Lemma 5.6. Thus we may define a space of vector valued functions Theorem 1.2, with γ close to 1, follows from Corollary 5.1 provided we assume b(·) is diagonal. Next we deal with the case of nondiagonal b(·). We restrict ourselves first to the case d = 2. An arbitrary real symmetric 2 × 2 matrix can be written as The random matrix induces random values of the variables λ, µ, θ as follows: If λ(·) > µ(·) then θ(·) is the unique angle, 0 ≤ θ < π, such that [λ(·) − µ(·)] sin 2θ = −2b 12 (·). If λ(·) = µ(·) then we take θ(·) = 0.
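The parametrization referred to here is the usual spectral decomposition of a symmetric 2 × 2 matrix; with the rotation convention chosen so as to match the sign in the displayed relation (the convention is assumed here), it reads

```latex
b \;=\;
\begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} \lambda & 0 \\ 0 & \mu \end{pmatrix}
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},
\qquad \lambda \ge \mu, \quad 0 \le \theta < \pi,
```

whose off-diagonal entry is b 12 = −(λ − µ) sin θ cos θ, so that (λ − µ) sin 2θ = −2b 12 as stated.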
We consider again the linear spaces V k defined after Lemma 5.7. The dimension of V k is bounded by (k + 1) d(d+1)/2 , which is (k + 1) 3 when d = 2. Just as before we let Y k,j , 0 ≤ j ≤ J k , k = 0, 1, 2, ..., be an orthonormal set of variables in L 2 (Ω) with the property that Y 0,0 ≡ 1 and V k is spanned by the variables Y k′,j , 0 ≤ j ≤ J k′ , 0 ≤ k′ ≤ k, k = 0, 1, 2, ... . For a variable Y k,j we denote by s = (k, j) its spin, with modulus |s| = k. It follows from the definition of the spaces V k that then, where S is the set of spins s = (k, j) and λ i,s are arbitrary parameters. We associate a modulus |s| with each s ∈ S M,K . If s ∈ S ∩ S M,K then the modulus of s is as in the previous paragraph. Otherwise if s = (ℓ, m, r, k, s′) then |s| = ℓ + m + |r| + |s′| = ℓ + m + |r| + M . We associate a variable to each s ∈ S M,K . For s ∈ S ∩ S M,K we put Y s,M,K = Y s . If s = (ℓ, m, r, k, s′) we put Y s,M,K = X ℓ,m,r Y s′ − ⟨X ℓ,m,r Y s′ ⟩. It is clear that ⟨Y s,M,K ⟩ = 0. If we use the fact that the L ∞ norm of the Legendre polynomials is 1 (see, for example, [1]) then we see also that ⟨Y 2 s,M,K ⟩ ≤ 8|s| 2 + 1, s ∈ S M,K . In analogy to before we define for n ∈ Z 2 , s ∈ S M,K , variables Y n,s,M,K by Y n,s,M,K (·) = Y s,M,K (τ n ·). We may also define the Fock space F 2 S M,K (Z d ) and a mapping U α,M,K corresponding to (5.1). Thus for ψ ∈ F S M,K (Z d ) one has Proof. We can use the same argument as in Lemma 5.9 since we know that ⟨Y 2 s,M,K ⟩ ≤ 8|s| 2 + 1.
Next we define for any i, j, 1 ≤ i, j ≤ 2, an operator B i,j,α,M,K on F 2 S M,K (Z 2 ). Proof. Suppose Ψ ∈ F 2 S M,K (Z 2 ). We fix points {n 1 , ..., n N } ∈ Z 2,N with n j ≠ 0, 1 ≤ j ≤ N , and s j ∈ S M,K , 1 ≤ j ≤ N . We define parameters λ i,s , s ∈ S M,K , and λ i,0 , i = 1, 2, by λ i,0 = Ψ i,N (n 1 , s 1 , ..., n N , s N ), i = 1, 2, λ i,s = Ψ i,N +1 (0, s, n 1 , s 1 , ..., n N , s N ), s ∈ S M,K , i = 1, 2. for any δ > 0. In view of (5.21) it follows that the sum of the first two terms in the last expression is bounded above by In view of (5.19) the sum of the next two terms is bounded above by The fifth term is bounded above by The sixth term is bounded above by The final term is bounded by The result follows now exactly as in Lemma 5.10 by choosing α, 0 < α < 1, such that α −3 γ < 2γ/(1 + γ), then choosing δ small and K large so that (1 + δ −1 )/K is small.
We can easily extend the argument in Lemma 5.13 to obtain: Corollary 5.2. There exists α, M 0 , K 0 , p 0 , 0 < α < 1, M 0 , K 0 positive integers, 1 < p 0 < 2, depending only on γ such that if p 0 ≤ p ≤ 2 then B α,M 0 ,K 0 is a bounded operator on F p S M 0 ,K 0 ,2 (Z 2 ) with norm B α,M 0 ,K 0 ≤ 2γ/(1 + γ). Theorem 1.2, with γ close to 1, follows from Corollary 5.2 just as before. Since it is clear that one can extend the previous argument to d > 2, the proof of Theorem 1.2 is complete. The proof of Theorem 1.3 follows in a similar manner.