Gaussian limits for random geometric measures

Given n independent random marked d-vectors X_i with a common density, define the measure ν_n = ∑_i ξ_i, where ξ_i is a measure (not necessarily a point measure) determined by the (suitably rescaled) set of points near X_i. Technically, this means here that ξ_i stabilizes with a suitable power-law decay of the tail of the radius of stabilization. For bounded test functions f on R^d, we give a central limit theorem for ν_n(f), and deduce weak convergence of ν_n(·), suitably scaled and centred, to a Gaussian field acting on bounded test functions. The general result is illustrated with applications to measures associated with germ-grain models, random and cooperative sequential adsorption, Voronoi tessellations and the k-nearest neighbours graph.


Introduction
This paper is concerned with the study of the limiting behaviour of random measures based on marked Poisson or binomial point processes in d-dimensional space, arising as the sum of contributions from each point of the point process. Many random spatial measures can be described in these terms, and general limit theorems, including laws of large numbers, central limit theorems, and large deviation principles, are known for the total measure of such measures, based on a notion of stabilization (local dependence); see (21; 22; 23; 25).
Recently, attention has turned to the asymptotic behaviour of the measure itself (rather than only its total measure), notably in (3; 11; 18; 19; 24; 25). It is of interest to determine when one can show weak convergence of this measure to a Gaussian random field. As in Heinrich and Molchanov (11) and Penrose (18), one can consider a limiting regime where a homogeneous Poisson process is sampled over an expanding window. In an alternative limiting regime, the intensity of the point process becomes large and the point process is locally scaled to keep the average density of points bounded; the latter approach allows for point processes with non-constant densities and is the one adopted here.
A random measure is said to be exponentially stabilizing when the contribution of an inserted point is determined by the configuration of (marked) Poisson points within a finite (though in general random) distance, known as a radius of stabilization, having a uniformly exponentially decaying tail after scaling of space. Baryshnikov and Yukich (3) have proved general results on weak convergence to a limiting Gaussian field for exponentially stabilizing measures. A variety of random measures are exponentially stabilizing, including those associated with the nearest neighbour graph, Voronoi and Delaunay graphs, germ-grain models with bounded grains, and sequential packing.
In the present work we extend the results of (3) in several directions. Specifically, in (3) attention is restricted to the case where the random measure is concentrated at the points of the underlying point process, and to continuous test functions; we relax both of these restrictions, and so are able to include indicator functions of Borel sets as test functions. Moreover, we relax the condition of exponential stabilization to power-law stabilization.
We state our general results in Section 2. Our approach to proof may be summarized as follows.
In the case where the underlying point process is Poisson, we obtain the covariance structure of our limiting random field using the objective method, which is discussed in Section 3. To show that the limiting random field is Gaussian, we borrow normal approximation results from (24) which were proved there using Stein's method (in contrast, (3) uses the method of moments). Finally, to de-Poissonize the central limit theorems (i.e., to extend them to binomial point processes with a non-random number of points), in Section 5 we perform further second moment calculations using a version of the objective method. This approach entails an annoyingly large number of similar calculations (see Lemmas 3.7 and 5.1) but avoids the necessity of introducing a notion of 'external stabilization' (see Section 2), which was used to deal with the second moment calculations for de-Poissonization in (3). Avoiding external stabilization, in turn, seems to be necessary to include germ-grain models with unbounded grains; this is one of the fields of application of the general results which we discuss in Section 6. Others include random measures arising from random and cooperative sequential adsorption processes, from Voronoi tessellations and from k-nearest neighbour graphs. We give our proofs in the general setting of marked point processes, which is the context for many of the applications.
We briefly summarize the various notions of stabilization in the literature. The Gaussian limit theorems in (18; 21) require external stabilization but without any conditions on the tail of the radius of stabilization. The laws of large numbers in (19; 23) require only 'internal' stabilization of the same type as in the present paper (see Definition 2.2) but without tail conditions. In (3), both internal stabilization (with exponential tail conditions) and (for binomial point processes) external stabilization are needed for the Gaussian limits, while in the present paper we derive Gaussian limits using only internal stabilization with power-law tail conditions (see Definition 2.4), although it seems unlikely that the order of power-law decay required in our results is the best possible.

Notation and results
Let (M, F_M, µ_M) be a probability space (the mark space). Let d ∈ N. Let ξ(x; X, A) be an R-valued function defined for all triples (x; X, A), where X ⊂ R^d × M is finite, x = (x, t) ∈ X (so x ∈ R^d and t ∈ M), and A is a Borel set in R^d. We assume that (i) for Borel A ⊂ R^d the function (x, X) ↦ ξ(x; X, A) is Borel-measurable, and (ii) for each x, X the function ξ(x; X) := ξ(x; X, ·) is a σ-finite measure on R^d. (Our results actually hold when ξ(x; X) is a signed measure with σ-finite total variation; see the remarks at the end of this section.)

We view each x = (x, t) ∈ R^d × M as a marked point in R^d and X as a set of marked points in R^d. Thus ξ(x; X) is a measure determined by the marked point x = (x, t) and the marked point set X. We think of this measure as being determined by the marked points of X lying 'near' to x (in a manner to be made precise below), and of the measure itself as being concentrated 'near' x; in fact, in many examples the measure ξ(x; X) is a point mass at x of magnitude determined by X (see condition A1 below). Even when this condition fails we shall sometimes refer to ξ((x, t); X) as the measure 'at x' induced by X.

Suppose x = (x, t) ∈ R^d × M and X ⊂ R^d × M is finite. If x ∉ X, we abbreviate notation and write ξ(x; X) instead of ξ(x; X ∪ {x}). We also write

X^{x,t} := X^x := X ∪ {x}. (2.1)

Given a > 0 and y ∈ R^d, we let y + ax := (y + ax, t) and y + aX := {y + az : z ∈ X}; in other words, scalar multiplication and translation act on only the first component of elements of R^d × M. For A ⊆ R^d we write y + aA for {y + ax : x ∈ A}. We say ξ is translation invariant if ξ((x, t); X, A) = ξ((y + x, t); y + X, y + A) for all y ∈ R^d, all finite X ⊂ R^d × M with x ∈ X, and all Borel A ⊆ R^d. Some of the general concepts defined in the sequel can be expressed more transparently when ξ is translation invariant.
Another simpler special case is that of unmarked points, i.e., point processes in R d rather than R d × M. The marked point process setting generalizes the unmarked point process setting because we can take M to have a single element and then identify points in R d × M with points in R d . In the case where M has a single element t 0 , it is simplest to think of bold-face elements such as x as representing unmarked elements of R d ; in the more general marked case the bold-face x represents a marked point (x, t) with corresponding spatial location given by x.
Let κ be a probability density function on R^d. Abusing notation slightly, we also let κ denote the corresponding probability measure on R^d, i.e. we write κ(A) for ∫_A κ(x) dx, for Borel A ⊆ R^d. Let κ_∞ denote the supremum of κ(·), and let supp(κ) denote the support of κ, i.e., the smallest closed set B in R^d with κ(B) = 1. We assume throughout that κ is Lebesgue-almost everywhere continuous.
For λ > 0 and n ∈ N, define the following point processes in R^d × M:
• P_λ: a Poisson point process with intensity measure λκ × µ_M.
• X n : a point process consisting of n independent identically distributed random elements of R d × M with common distribution given by κ × µ M .
• H λ : a Poisson point process in R d ×M with intensity λ times the product of d-dimensional Lebesgue measure and µ M (the H stands for 'homogeneous').
• H̃_λ: an independent copy of H_λ.
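For readers who wish to experiment numerically, all three point processes are straightforward to simulate. The following sketch is illustrative only: it assumes, purely for concreteness, that κ is the uniform density on [0, 1]^d and that the mark space M is a small finite set with µ_M uniform; the function names are ours, not part of any library.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_P_lambda(lam, d=2, n_marks=3):
    """P_lambda: Poisson process with intensity measure lam * kappa x mu_M.
    Here kappa is uniform on [0,1]^d and mu_M uniform on {0,...,n_marks-1}."""
    n = rng.poisson(lam)                       # Poisson total number of points
    locations = rng.random((n, d))             # i.i.d. locations with density kappa
    marks = rng.integers(0, n_marks, size=n)   # i.i.d. marks with law mu_M
    return locations, marks

def sample_X_n(n, d=2, n_marks=3):
    """X_n: binomial process of exactly n i.i.d. marked points."""
    return rng.random((n, d)), rng.integers(0, n_marks, size=n)

def sample_H_lambda(lam, box=1.0, d=2, n_marks=3):
    """H_lambda restricted to [0,box]^d: homogeneous Poisson process of intensity lam."""
    n = rng.poisson(lam * box ** d)
    return box * rng.random((n, d)), rng.integers(0, n_marks, size=n)
```

Note that P_λ and H_λ differ in that the former has intensity λκ(x) at x while the latter has constant intensity λ on all of R^d (truncated to a box in this sketch).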
Suppose we are given a family of open subsets Ω_λ of R^d, indexed by λ > 0. Assume the sets Ω_λ are nondecreasing in λ, i.e. Ω_λ ⊆ Ω_λ′ for λ < λ′. Denote by Ω_∞ the limiting set, i.e. set Ω_∞ := ∪_{λ≥1} Ω_λ. Suppose we are given a further Borel set Ω (not necessarily open) with Ω_∞ ⊆ Ω ⊆ R^d.

For λ > 0, for finite X ⊂ R^d × M with x = (x, t) ∈ X, and for Borel A ⊆ R^d, let

ξ_λ(x; X, A) := ξ(x; x + λ^{1/d}(−x + X), x + λ^{1/d}(−x + A)) 1_{Ω_λ}(x). (2.2)

Here the idea is that the point process x + λ^{1/d}(−x + X) is obtained by a dilation, centred at x, of the original point process. We shall call this the λ-dilation of X about x. Loosely speaking, this dilation has the effect of reducing the density of points by a factor of λ. Thus the rescaled measure ξ_λ(x; X, A) is the original measure ξ at x relative to the image of the point process X under a λ-dilation about x, acting on the image of 'space' (i.e. the set A) under the same λ-dilation.
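The λ-dilation is a simple affine map on locations. A minimal numerical illustration (names ours; κ taken uniform on the unit square for concreteness) exhibits the advertised density reduction: dilating a sample of λ points about x by the factor λ^{1/d} produces a configuration of roughly unit intensity near x.

```python
import numpy as np

def lambda_dilation(x, locations, lam, d):
    """The lam-dilation of a point set about x: x + lam^(1/d) * (-x + X).
    Acts on spatial locations only; marks would be carried along unchanged."""
    return x + lam ** (1.0 / d) * (locations - x)

rng = np.random.default_rng(1)
lam, d = 10000.0, 2
locations = rng.random((int(lam), d))        # roughly P_lam for kappa uniform on [0,1]^2
x = np.array([0.5, 0.5])
dilated = lambda_dilation(x, locations, lam, d)

# Near x the dilated process has intensity kappa(x) = 1: the ball B_5(x)
# should contain on the order of pi * 5^2 (about 79) dilated points.
count = int(np.sum(np.linalg.norm(dilated - x, axis=1) < 5.0))
```

The dilation fixes x itself, which is why the count is taken in a ball centred at x.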
When ξ is translation invariant, the rescaled measure ξ_λ simplifies to

ξ_λ((x, t); X, A) = ξ((0, t); λ^{1/d}(−x + X), λ^{1/d}(−x + A)) 1_{Ω_λ}(x). (2.3)

Our principal objects of interest are the random measures µ^ξ_λ and ν^ξ_{λ,n} on R^d, defined for λ > 0 and n ∈ N by

µ^ξ_λ := ∑_{x∈P_λ} ξ_λ(x; P_λ); ν^ξ_{λ,n} := ∑_{x∈X_n} ξ_λ(x; X_n). (2.4)

We are also interested in the centred versions of these measures, µ̄^ξ_λ := µ^ξ_λ − E[µ^ξ_λ] and ν̄^ξ_{λ,n} := ν^ξ_{λ,n} − E[ν^ξ_{λ,n}] (which are signed measures). We study these measures via their action on test functions in the space B(Ω) of bounded Borel-measurable functions on Ω. We let B̃(Ω) denote the subclass of B(Ω) consisting of those functions that are Lebesgue-almost everywhere continuous. When Ω ≠ R^d, we extend functions f ∈ B(Ω) to R^d by setting f(x) = 0 for x ∈ R^d \ Ω.
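As a concrete instance of the measures in (2.4), take the toy (unmarked, translation-invariant) choice in which ξ(x; X) is a point mass at x whose magnitude is the nearest-neighbour distance of x in X; the rescaled mass is then the nearest-neighbour distance in the λ-dilated set, i.e. λ^{1/d} times the original one. A sketch of the action ⟨f, ν^ξ_{λ,n}⟩ for this choice (assuming κ uniform on [0, 1]^2 and f bounded Borel; all names ours):

```python
import numpy as np

def nu_f(locations, f, lam, d):
    """<f, nu_{lam,n}> for the toy measure xi(x; X) = (NN distance of x in X)
    times a point mass at x; after lam-dilation the mass is lam^(1/d) * NN."""
    n = len(locations)
    total = 0.0
    for i in range(n):
        diffs = np.delete(locations, i, axis=0) - locations[i]
        nn = float(np.min(np.linalg.norm(diffs, axis=1)))  # NN distance within X_n
        total += f(locations[i]) * lam ** (1.0 / d) * nn   # rescaled point mass at x_i
    return total

rng = np.random.default_rng(2)
pts = rng.random((500, 2))
# With f = 1 and lam = n, each rescaled NN mass is about 1/2 for uniform kappa,
# so the total should be near n/2 = 250 (slightly more, due to boundary effects).
total = nu_f(pts, lambda x: 1.0, 500.0, 2)
```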
The indicator function 1 Ω λ (x) in the definition (2.2) of ξ λ means that only points x ∈ Ω λ × M contribute to µ ξ λ or ν ξ λ,n . In most examples, the sets Ω λ are all the same, and often they are all R d . However, there are cases where moments conditions such as (2.7) and (2.8) below hold for a sequence of sets Ω λ but would not hold if we were to take Ω λ = Ω ∞ for all λ; see, e.g. (20). Likewise, in some examples (such as those concerned with Voronoi tessellations), the measure ξ(x; X ) is not finite on the whole of R d but is well-behaved on Ω; hence the restriction of attention to test functions in B(Ω).
Similarly, let ⟨f, ν̄^ξ_{λ,n}⟩ := ⟨f, ν^ξ_{λ,n}⟩ − E⟨f, ν^ξ_{λ,n}⟩. Let | · | denote the Euclidean norm on R^d, and for x ∈ R^d and r > 0, define the ball B_r(x) := {y ∈ R^d : |y − x| ≤ r}. We denote by 0 the origin of R^d and abbreviate B_r(0) to B_r. Finally, define B*_r to be the set of marked points distant at most r from the origin, i.e. set B*_r := B_r × M. We say a set X ⊂ R^d × M is locally finite if X ∩ B*_r is finite for all r > 0. For x = (x, t) ∈ R^d × M and Borel A ⊆ R^d, we extend the definition of ξ((x, t); X, A) to locally finite infinite point sets X by setting ξ(x; X, A) := lim sup_{r→∞} ξ(x; X ∩ B*_r, A).
Also, we define the x-shifted version ξ^x_∞(·, ·) of ξ(x; ·, ·) by

ξ^x_∞(X, A) := ξ(x; x + X, x + A). (2.6)

Definition 2.1. Let T, T′ and T′′ denote generic random elements of M with distribution µ_M, independent of each other and of all other random objects we consider. Similarly, let X and X′ denote generic random d-vectors with distribution κ, independent of each other and of all other random objects we consider. Set X := (X, T) and X′ := (X′, T′).
Our limit theorems for µ^ξ_λ require certain moments conditions on the total mass of the rescaled measure ξ_λ at x with respect to the point process P_λ (possibly with an added marked point), for an arbitrary point x ∈ R^d carrying a generic random mark T. More precisely, for p > 0 we consider ξ satisfying the moments conditions

sup_{λ≥1, x∈R^d} E[ξ_λ((x, T); P_λ, R^d)^p] < ∞; (2.7)

sup_{λ≥1, x∈R^d, y∈R^d} E[ξ_λ((x, T); P^{y,T′}_λ, R^d)^p] < ∞. (2.8)

We extend notions of stabilization, introduced in (21; 23; 3), to the present setting. Given Borel subsets A and A′ of R^d, the radius of stabilization of ξ at (x, t) with respect to X and A, A′ is a random distance R with the property that the restriction of the measure ξ((x, t); X) to x + A′ is unaffected by changes to the points of X in x + A at a distance greater than R from x. The precise definition goes as follows.
the radius of stabilization of ξ at x with respect to X and A, A′) to be the smallest integer-valued r ≥ 0 such that In the case where ξ is translation-invariant, R((x, t); X) = R((0, t); X), so that R((x, t); X) does not depend on x. Of particular importance to us will be radii of stabilization with respect to the homogeneous Poisson processes H_λ and with respect to the non-homogeneous Poisson processes P_λ, suitably scaled.
We assert that R(x; X, A, A′) is a measurable function of X, and hence, when X is a random point set such as H_λ or P_λ, R(x; X, A, A′) is an N ∪ {∞}-valued random variable. This assertion is demonstrated in (19) for the case A = A′ = R^d, and the argument carries over to general A, A′.

The next condition needed for our theorems requires finite radii of stabilization with respect to homogeneous Poisson processes, possibly with a point inserted, and, in the non-translation-invariant case, also requires local tightness of these radii. We use the notation from (2.1) in this definition.

Definition 2.3. For x ∈ R^d and λ > 0, we shall say that ξ is λ-homogeneously stabilizing at x if for all z ∈ R^d, In the case where ξ is translation-invariant, R(x, t; X) does not depend on x, and ξ^{x,T}_∞(·) does not depend on x, so that the simpler-looking condition suffices to guarantee condition (2.9).
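For the toy nearest-neighbour measure (ξ(x; X) a point mass at x of magnitude the nearest-neighbour distance of x in X), a radius of stabilization at x is the nearest-neighbour distance itself: inserting or deleting points farther from x than this leaves the measure at x unchanged, while a closer point changes it. A sketch (names ours; unmarked case, A = A′ = R^d):

```python
import numpy as np

def nn_dist(x, locations):
    """Nearest-neighbour distance of the location x within `locations`
    (x itself assumed not to be in the set)."""
    return float(np.min(np.linalg.norm(locations - x, axis=1)))

rng = np.random.default_rng(3)
X = rng.random((100, 2))
x = np.array([0.5, 0.5])
R = nn_dist(x, X)                            # a radius of stabilization at x

far = x + np.array([2.0 * R, 0.0])           # a point beyond distance R ...
assert nn_dist(x, np.vstack([X, far])) == R  # ... does not affect the measure at x

near = x + np.array([R / 2.0, 0.0])          # a point within distance R ...
changed = nn_dist(x, np.vstack([X, near]))   # ... does affect it
assert np.isclose(changed, R / 2.0)
```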
We now introduce notions of exponential and power-law stabilization. The terminology refers to the tails of the distributions of radii of stabilization with respect to (dilations of) the nonhomogeneous point processes P λ and X n .
For k = 2 or k = 3, let S_k denote the set of all finite A ⊂ supp(κ) with at most k elements (including the empty set), and for nonempty A ∈ S_k, let A* denote the subset of supp(κ) × M (with the same number of elements) obtained by equipping each element of A with a µ_M-distributed mark; for example, for A = {x, y} ∈ S_2 set A* = {(x, T′), (y, T′′)}.
When A is the empty set ∅ we write R λ,n (x, t) for R λ,n (x, t; ∅).
It is easy to see that if ξ is exponentially stabilizing for κ then it is power-law stabilizing of all orders for κ. Similarly, if ξ is binomially exponentially stabilizing for κ then it is binomially power-law stabilizing of all orders for κ.
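The first claim is the standard observation that an exponential tail dominates every polynomial tail; a one-line sketch, with C, c denoting the constants in the assumed exponential bound:

```latex
\text{If } P[R_{\lambda}(x,t) > s] \le C e^{-cs} \text{ for all } s > 0,
\text{ then for every } q > 0,
\qquad
\sup_{s \ge 1} s^{q}\, P[R_{\lambda}(x,t) > s]
\;\le\; C \sup_{s \ge 1} s^{q} e^{-c s} \;<\; \infty ,
```

since s ↦ s^q e^{−cs} is bounded on [1, ∞). The binomial case is identical, with R_{λ,n} in place of R_λ.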
In the non translation-invariant case, we shall also require the following continuity condition.
In the unmarked case, this says simply that the total measure of ξ(x; X ) is almost everywhere continuous in (x, X ), as is the measure of a ball ξ(x; X , B r ) for large r.
Definition 2.5. We say ξ has almost everywhere continuous total measure if there exists K_1 > 0 such that for all m ∈ N, Lebesgue-almost all (x, x_1, . . . , x_m) ∈ (R^d)^{m+1}, µ_M^{m+1}-almost all (t, t_1, . . . , t_m) ∈ M^{m+1}, and each K ≥ K_1, each of the functions (y, y_1, . . . , y_m) ↦ ξ((y, t); {(y_1, t_1), . . . , (y_m, t_m)}, R^d) and (y, y_1, . . . , y_m) ↦ ξ((y, t); {(y_1, t_1), . . . , (y_m, t_m)}, B_K(y)) is continuous at (y, y_1, . . . , y_m) = (x, x_1, . . . , x_m).
Define the following formal assumptions on the measures ξ.
A3: ξ has almost everywhere continuous total measure.
Our next result gives the asymptotic variance of ⟨f, µ^ξ_λ⟩ for f ∈ B(Ω). The formula for this involves the quantity V^ξ(x, a), defined for x ∈ R^d and a > 0 by the following formula, which uses the notation introduced in (2.1) and (2.6): (2.14) In the translation-invariant and unmarked case, the first term in the integrand on the right hand side of (2.13) reduces to In general, the integrand can be viewed as a pair correlation function (in the terminology of (3)), which one expects to decay rapidly as |z| gets large, because an added point at z should have little effect on the measure at 0, and vice versa, when |z| is large.
Suppose also that ξ satisfies the moments conditions (2.7) and (2.8) for some p > 2, and is power-law stabilizing for κ of order q for some q with q > p/(p − 2). Suppose also that Assumption A2 or A3 holds. Suppose either that f ∈ B̃(Ω), or that A1 holds and f ∈ B(Ω). Then the integral in (2.13) converges for κ-almost all x ∈ Ω_∞, and σ^{ξ,κ}_{f,f} < ∞, and

Our next result is a central limit theorem for λ^{−1/2}⟨f, µ̄^ξ_λ⟩. Let N(0, σ²) denote the normal distribution with mean 0 and variance σ² (if σ² > 0), or the unit point mass at 0 if σ² = 0. We list some further assumptions.
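The convergence of variance asserted here can be observed numerically for the toy nearest-neighbour mass on the unit square (κ uniform, f ≡ 1): the ratio Var⟨f, µ^ξ_λ⟩/λ should settle to a constant as λ grows. A seeded Monte Carlo sketch, with binomial samples of n = λ points standing in for P_λ and all names ours:

```python
import numpy as np

rng = np.random.default_rng(4)

def total_mass(lam, n):
    """<1, nu_{lam,n}> for the toy measure whose point mass at each point is
    its rescaled (by lam^(1/2), d = 2) nearest-neighbour distance."""
    pts = rng.random((n, 2))
    d2 = np.sum((pts[:, None, :] - pts[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)               # exclude self-distances
    return np.sqrt(lam) * np.sqrt(d2.min(axis=1)).sum()

def var_over_lam(lam, reps=200):
    samples = [total_mass(lam, int(lam)) for _ in range(reps)]
    return float(np.var(samples)) / lam

# If Var<f, mu_lambda>/lambda converges, these two estimates are comparable.
r1, r2 = var_over_lam(100.0), var_over_lam(400.0)
```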
The corresponding results for the random measures ν^ξ_{λ,n} require further conditions. These extend the previous stabilization and moments conditions to binomial point processes. Our extra moments condition is on the total mass of the rescaled measure ξ_λ at x with respect to the binomial point process X_m, with m close to λ and with up to three added marked points, for an arbitrary randomly marked point x ∈ R^d. That is, we require We give strengthened versions of A4 and A5 above, to include condition (2.15) and binomial stabilization.
For x ∈ R^d and a > 0, set This may be viewed as a 'mean add one cost'; it is the expected total effect of an inserted marked point at the origin on the mass, using the x-shifted measure ξ^x_∞.

Theorem 2.3. Suppose that ξ is κ(x)-homogeneously stabilizing (see Definition 2.3) for κ-almost all x ∈ R^d, satisfies Assumption A2 or A3, and also satisfies A4′ or A5′. Then for any sequence (λ(n), n ∈ N) taking values in (0, ∞), such that lim sup_{n→∞} n^{−1/2}|λ(n) − n| < ∞, and any f ∈ B̃(R^d), we have as n → ∞ that If, in addition, Assumption A1 holds, then these conclusions also hold for f ∈ B(R^d).
Remarks. Since σ^{ξ,κ}_{f,g} and τ^{ξ,κ}_{f,g} are bilinear in f and g, it is easy to deduce from the 'convergence of variance' conclusions in Theorems 2.1 and 2.3 the corresponding 'convergence of covariance' statement for any two test functions f, g. Also, standard arguments based on the Cramér-Wold device (see e.g. (17), (5)) show that under the conditions of Theorem 2.2, respectively Theorem 2.3, we can deduce convergence of the random field (λ^{−1/2}⟨f, µ̄^ξ_λ⟩, f ∈ B̃(Ω)), respectively (n^{−1/2}⟨f, ν̄^ξ_{λ(n),n}⟩, f ∈ B̃(Ω)), to a mean-zero finitely additive Gaussian field with covariances given by σ^{ξ,κ}_{f,g}, respectively τ^{ξ,κ}_{f,g}. If also A1 holds, then the domain of the random field can be extended to functions f ∈ B(Ω).
Theorems 2.1, 2.2 and 2.3 resemble the main results of Baryshnikov and Yukich (3), in that they provide central limit theorems for random measures under general stabilization conditions. We indicate here some of the ways in which our results extend those in (3).
In (3), attention is restricted to cases where assumption A1 holds, i.e., where the contribution from each point to the random measures is a point mass at that point. It is often natural to drop this restriction, for example when considering the volume or surface measure associated with a germ-grain model, examples we shall consider in detail in Section 6.
Another difference is that under A1, we consider bounded test functions in B(Ω), whereas in (3) attention is restricted to continuous bounded test functions. By taking test functions which are indicator functions of arbitrary Borel sets A_1, . . . , A_m in Ω, we see from Theorem 2.2 that under Assumption A1, the joint distribution of (λ^{−1/2} µ̄^ξ_λ(A_i), 1 ≤ i ≤ m) converges to a multivariate normal with covariances given by ∫_{A_i ∩ A_j} V^ξ(x, κ(x))κ(x) dx, and likewise for ν̄^ξ_{λ(n),n} by Theorem 2.3. This desirable conclusion is not achieved from the results of (3), because indicator functions of Borel sets are not continuous. When our assumption A1 fails, for the central limit theorems we restrict attention to almost everywhere continuous test functions, which means we can still obtain the above conclusion provided the sets A_i have Lebesgue-null boundary.
The de-Poissonization argument in (3) requires finiteness of what might be called the radius of external stabilization; see Definition 2.3 of (3). Loosely speaking, an inserted point at x is not affected by and does not affect points at a distance beyond the radius of external stabilization; in contrast an inserted point at x is unaffected by points at a distance beyond the radius of stabilization, but might affect other points beyond that distance. Our approach does not require external stabilization, which brings some examples within the scope of our results that do not appear to be covered by the results of (3). See the example of germ-grain models, considered in Section 6.
In the non-translation-invariant case, we require ξ to have almost everywhere continuous total measure, whereas in (3) the functional ξ is required to be in a class SV(4/3) of 'slowly varying' functionals. The almost everywhere continuity condition on ξ is simpler and usually easier to check than the SV(4/3) condition, which requires a form of uniform Hölder continuity of expected total measures (see the example of cooperative sequential adsorption in Section 6.2).
We assume that the density κ is almost everywhere continuous with κ ∞ < ∞, and for Theorems 2.2 and 2.3 that supp(κ) is bounded. In contrast, in (3) it is assumed that κ has compact convex support and is continuous on its support (see the remarks just before Lemma 4.2 of (3)).
Our moments condition (2.8) is simpler than the corresponding condition in (3) (eqn (2.2) of (3)). Using A7 and A5 ′ in Theorems 2.2 and 2.3, we obtain Gaussian limits for random fields under polynomial stabilization of sufficiently high order; the corresponding results in (3) require exponential stabilization.
We spell out the statement and proof of Theorems 2.2 and 2.3 in the setting of marked point processes (i.e. point processes in R^d × M rather than in R^d), whereas the proofs in earlier works (3; 21) are given in the setting of unmarked point processes (i.e., point processes in R^d). The marked point process setting includes many interesting examples such as germ-grain models and on-line packing; as mentioned earlier, it generalizes the unmarked setting.
Other papers concerned with central limit theorems for random measures include Heinrich and Molchanov (11) and Penrose (18). The setup of (11) is somewhat different from ours; the emphasis there is on measures associated with germ-grain models and the method for defining the measures from the marked point sets (eqns (3.7) and (3.8) of (11)) is more prescriptive than that used here. In (11) the underlying point processes are taken to be stationary point processes satisfying a mixing condition and no notion of stabilization is used, whereas we restrict attention to Poisson or binomial point processes but do not require any spatial homogeneity.
The setup in (18) is closer to that used here (although the proof of the central limit theorems is different) but has the following notable differences. The point processes considered in (18) are assumed to have constant intensity on their support. The notion of stabilization used in (18) is a form of external stabilization. For the multivariate central limit theorems in (18) to be applicable, the radius of external stabilization needs to be almost surely finite but, unlike in the present work, no bounds on the tail of this radius of stabilization are required. The test functions in (18) lie in a subclass of B̃(Ω), not B(Ω). The description of the limiting variances in (18) is different from that given here.
Our results carry over to the case where ξ(x; X , ·) is a signed measure with finite total variation. The conditions for the theorems remain unchanged if we take signed measures, except that if ξ is a signed measure, the moments conditions (2.7), (2.8), and (2.15) need to hold for both the positive and the negative part of ξ. The proofs need only minor modifications to take signed measures into account.
In general, the limiting variance σ^{ξ,κ}_{f,f} or τ^{ξ,κ}_{f,f} could be zero, in which case the corresponding limiting Gaussian variable given by Theorem 2.3 is degenerate. In most examples this seems not to be the case. We do not here address the issue of giving general conditions guaranteeing that the limiting variance is nonzero, except to refer to the arguments given in (3), themselves based on those in (21), and in (1).

Weak convergence lemmas
A key part of the proof of Theorems 2.1 and 2.3 is to obtain certain weak convergence results, namely Lemmas 3.4, 3.5, 3.6 and 3.7 below. It is noteworthy that in all of these lemmas, the stabilization conditions used always refer to homogeneous Poisson processes on R d ; the notion of exponential stabilization with respect to a non-homogeneous point process is not used until later on.
To prove these lemmas, we shall use a version of the 'objective method', building on ideas in (19). We shall be using the Continuous Mapping Theorem ((5), Chapter 1, Theorem 5.1), which says that if h is a mapping from a metric space E to another metric space E ′ , and X n are E-valued random variables converging in distribution to X which lies almost surely at a continuity point of h, then h(X n ) converges in distribution to h(X).
we use the following metric D on L, the space of locally finite subsets of R^d × M: two point sets are close when they coincide on a large ball centred at the origin; specifically, set

D(X, X′) := (max{K ∈ N : X ∩ B*_K = X′ ∩ B*_K})^{−1}, (3.1)

with D(X, X′) := 1 if X ∩ B*_1 ≠ X′ ∩ B*_1 and D(X, X′) := 0 if X = X′. Recall that x ∈ R^d is a Lebesgue point of f if ε^{−d} ∫_{B_ε(x)} |f(y) − f(x)| dy tends to zero as ε ↓ 0, and that the Lebesgue Density Theorem tells us that almost every x ∈ R^d is a Lebesgue point of f. Define the region Ω_0 as the set of points x ∈ Ω_∞ at which κ is continuous with κ(x) > 0. The next result says that, with an appropriate coupling, the λ-dilations of the point process P_λ about a net (sequence) of points y(λ) which approach x sufficiently fast approximate the homogeneous Poisson process H_{κ(x)} as λ → ∞. The result is taken from (19), and for completeness we give the proof in the Appendix.
Then there exist coupled realizations P′_λ and H′_{κ(x)} of P_λ and H_{κ(x)}, respectively, such that

In the next result, we assume the point processes X_m are coupled together in the natural way; that is, we let (X_1, T_1), (X_2, T_2), . . . denote a sequence of independent identically distributed random elements of R^d × M with common distribution κ × µ_M, and assume the point processes X_m, m ≥ 1, are given by X_m := {(X_1, T_1), . . . , (X_m, T_m)}. The next result says that when ℓ and m are close to λ, the λ-dilation of X_ℓ about x and the λ-dilation of X_m about y, with y ≠ x, approach independent homogeneous Poisson processes H_{κ(x)} and H_{κ(y)} as λ becomes large. Again we defer the proof (taken from (19)) to the Appendix.
Then as k → ∞, For λ > 0, let ξ*_λ((x, t); X, ·) be the point measure at x with total mass ξ_λ((x, t); X, Ω); i.e., for Borel A ⊆ R^d let

ξ*_λ((x, t); X, A) := 1_A(x) ξ_λ((x, t); X, Ω).

The next lemma provides control over the difference between the measure ξ_λ(x; X, ·) and the corresponding point measure ξ*_λ(x; X, ·). Again we give the proof (taken from (19)) in the Appendix for the sake of completeness. Suppose that x ∈ Ω_0 and z ∈ R^d, and that ξ is κ(x)-homogeneously stabilizing at x. Let K > 0, and suppose either that Assumption A2 holds or that A3 holds and K > K_1. Then

Taking our topology on R^d × M × L to be the product of the Euclidean topology on R^d, the discrete topology on M and the topology induced by the metric D on L which was defined at (3.1), we have from Lemma 3.1 that as λ → ∞, If Assumption A2 (translation invariance) holds, then the functional g_A(w, t, X) does not depend on w, so that g_A(w, t, X) = g_A(0, t, X), and by the assumption that ξ is κ(x)-homogeneously stabilizing at x, we have that (0, T, H_{κ(x)}) almost surely lies at a continuity point of the functional g_A.
If, instead, Assumption A3 (continuity) holds, take A = R d or A = B K or A = R d \ B K , with K > K 1 and K 1 given in Definition 2.5. Then by the assumption that ξ is κ(x)-homogeneously stabilizing at x (see (2.9)), with probability 1 there exists a finite (random) η > 0 such that for D(X , H κ(x) ) < η, and for |w| < η, Hence, (0, T, H κ(x) ) almost surely lies at a continuity point of the mapping g A in this case too.
Thus, for any K under A2 and for K > K_1 under A3, the mapping g_A satisfies the conditions for the Continuous Mapping Theorem, and this with (3.10) and (2.6) gives us (3.9).
The next two lemmas are key ingredients in proving Theorem 2.1 on convergence of second moments. In proving these, we use the notation (3.11), and for f ∈ B(Ω) we write ‖f‖_∞ for sup{|f(x)| : x ∈ Ω}. The next result says that the total ξ_λ-measure at x induced by P_λ converges weakly to the measure ξ^{x,T}_∞ induced by the homogeneous Poisson process H_{κ(x)}.
almost surely, and that Assumption A2 or A3 holds. Then

Since we assume ξ^{x,T}_∞(H_{κ(x)}, R^d) < ∞ almost surely, the last expression tends to zero in probability as K → ∞, and hence ξ_λ((v_λ, T); P_λ, R^d \ Ω) also tends to zero in probability. Combining this with the case A = R^d of (3.9) and using Slutsky's theorem, we obtain (3.12).
Given K > 0, by (3.9) we have where the limit is almost surely finite and converges in probability to zero as K → ∞. Hence for ε > 0, we have Also, given K > 0, it is the case that and by continuity of to the finite random variable ξ x,T ∞ (H κ(x) , R d ) by the case A = R d of (3.9), and hence the right hand side of (3.15) tends to zero in probability as λ → ∞. Combined with (3.14), this gives us Also, by (3.12) and continuity of f at x, we have , and combined with (3.16) this yields (3.13).
The next lemma is a refinement of the preceding one and concerns the convergence of the joint distribution of the total ξ_λ-measure induced by P_λ at x and at a nearby point x + λ^{−1/d}z, rather than at a single point; as in Section 2, by 'the ξ_λ-measure at x induced by X' we mean the measure ξ_λ((x, t); X, ·). In the following result the expressions P^{x,T}_λ and P^{x+λ^{−1/d}z,T′}_λ represent Poisson processes with added marked points, using notation from (2.1) and from Definition 2.1.
Lemma 3.6. Let x ∈ Ω 0 , z ∈ R d , and K > 0. Suppose either that ξ satisfies Assumption A2 or that ξ satisfies A3 and K > K 1 . Then as λ → ∞ we have and for any f ∈ B(Ω) with f continuous at x, as λ → ∞. Taking A = B K gives us (3.17).
and this limit tends to zero in probability as K → ∞; hence Combining this with the case A = R d of (3.20) and using Slutsky's theorem in two dimensions, we obtain (3.18).
The first term in the right hand side of (3.21) tends to zero in probability for any fixed K, by (3.18) and the fact that ξ x,T ∞ (H z,T ′ κ(x) , R d ) is almost surely finite. Also by (3.20), the second term in the right hand side of (3.21) converges in distribution, as λ → ∞, , which tends to zero in probability as K → ∞. Hence, by (3.21) we obtain We also have By (3.20) and the assumed continuity of f at x, the first term in the right side of (3.23) tends to zero in probability for any fixed K, while the second term converges in distribution to , which tends to zero in probability as K → ∞. Hence, as By continuity of f at x, and (3.18), we have Combining this with (3.22) and (3.24) yields (3.19).
The following lemma will be used for de-Poissonizing our central limit theorems. Essentially, it is a de-Poissonized version of Lemmas 3.5 and 3.6, referring to X_n with various added points, rather than to P_λ as in the earlier lemmas. To ease notation, we do not mention the marks in the notation for the statement and proof of this result. Recall that H̃_λ denotes an independent copy of H_λ.
, are all almost surely finite. Let f ∈ B(Ω) and suppose either that Assumption A1 holds, or that x and y are continuity points of f . Given integer-valued functions (ℓ(λ), λ ≥ 1) and (m(λ), λ ≥ 1) with ℓ(λ) ∼ λ and m(λ) ∼ λ as λ → ∞, we have convergence in joint distribution, as λ → ∞, of the 11-dimensional random vector Proof. First, we assert that This is deduced from Lemma 3.2 by an argument which we spell out only in the case of the third component. Defining the mapping h x on M × L by ) which is almost surely at a continuity point of h x . Similar arguments apply for the other components and give us the assertion above. This assertion implies that the result holds under A1, i.e. when ξ(x; X ) is a point mass at x. Now let us drop Assumption A1, but assume that x and y are continuity points of f . Then by and a similar argument to the proof of (3.22) (working with X m instead of P λ ) yields Very similar arguments (which we omit) yield Combining these eleven convergence in probability statements with the fact that we have established our conclusion in the case where Assumption A1 holds, and using Slutsky's theorem, we obtain our conclusion in the other case as well.

Proof of Theorems 2.1 and 2.2
Recall that, by definition, (Ω λ ) λ≥1 is a nondecreasing family of open subsets of R d with limit set Ω ∞ , and Ω ∞ ⊆ Ω ⊆ R d . In the simplest case Ω λ = R d for all λ.
In the sequel, we fix a test function f ∈ B(Ω). Set α λ as at (4.1) and β λ as at (4.2). The next identity is obtained by writing f, µ λ as an integrated two-point function and using the change of variable y = x + λ −1/d z. Later, we shall use stabilization to establish convergence of α λ and β λ to obtain Theorem 2.1.
Lemma 4.1. Suppose that (2.7) and (2.8) hold for p = 2. Then for λ ≥ 1, it is the case that α λ and β λ are finite and (4.3) holds.
Proof. By Palm theory for the Poisson process (e.g. a slight generalization of Theorem 1.6 of (17)), we have (4.4) and (4.5). Combining (4.4) and (4.5), we have (4.6). The first term on the right hand side of (4.6) equals λ −1 α λ . Thus, (4.6) yields a second term which, after the change of variables y = x + λ −1/d z, equals β λ as given by (4.2). Finally, conditions (2.7) and (2.8) for p = 2 guarantee that all integrals under consideration are finite.
Lemmas 3.5 and 3.6 establish limits in distribution for the variables inside the expectations in the integrands in the expressions (4.1) and (4.2) for α λ and β λ . To prove Theorem 2.1, we need to take these limits outside the expectations and also outside the integrals, which we shall do by a domination argument. It is in this step that we use the condition of stabilization with respect to non-homogeneous Poisson processes (Definition 2.4), via the following lemma, which is an estimate showing that the integrand in the definition (4.2) of β λ is small for large |z|, uniformly in λ. To ease notation, for x ∈ R d , z ∈ R d and λ > 0, we define random variables X = X x,z,λ and Z = Z x,z,λ , and similarly we define random variables X * = X * x,z,λ , Z * = Z * x,z,λ , X * ′ = X * ′ x,z,λ and Z * ′ = Z * ′ x,z,λ . The integrand in (4.2) can be rewritten in terms of these variables, and the aim is to show that it has small absolute value for large |z|, independently of x and λ. We write a ∧ b for min(a, b) in the sequel.
Lemma 4.2. Suppose that ξ satisfies (2.7) and (2.8) for some p > 2, and is power-law stabilizing for κ of order q for some q > dp/(p − 2). Then there is a constant C 1 , independent of λ, such that (4.12) and (4.13) hold for all λ ≥ 1, x ∈ R d and z ∈ R d .
Proof. The left hand sides of (4.12) and (4.13) are both zero unless both x and x + λ −1/d z lie in Ω λ , so we may assume this for the rest of the proof. Let X := X x,z,λ and Z := Z x,z,λ . Let X̄ := X 1 {R λ (x,T )≤|z|/3} and let Z̄ := Z 1 {R λ (x+λ −1/d z,T ′ )≤|z|/3} . Then X̄ and Z̄ are independent, because they are determined by the points of P λ in the balls of radius λ −1/d |z|/3 centred at x and at x + λ −1/d z respectively, and we have (4.14) and (4.15). By (2.8), Hölder's inequality, and the assumed power-law stabilization of order q > dp/(p − 2), there is a constant C 2 such that (4.16) holds, and likewise (4.17). By a similar argument using (2.7), there is a constant C 3 such that (4.18) holds. Subtracting (4.15) from (4.14) and using (4.16), (4.17) and (4.18) along with the Cauchy–Schwarz inequality, we may deduce that there is a constant C 1 , independent of λ, such that (4.12) holds for all λ ≥ 1. The argument for (4.13) is similar.
Some of the lemmas in Section 3 require as a condition that ξ x,T ∞ (H κ(x) , R d ) < ∞ almost surely. The next lemma shows that this condition follows from the conditions of the theorems stated in Section 2.
Proof. By Lemma 3.4, for large enough fixed K, (4.19) holds. Since Ω ∞ is open and contained in Ω, there exists a constant λ 0 such that for λ ≥ λ 0 we have both x ∈ Ω λ and B λ −1/d K (x) ⊆ Ω, and so by (2.7), for λ ≥ λ 0 the mean of the left side of (4.19) is bounded by a constant. Hence, by (4.19) and Fatou's lemma, the mean of ξ x,t ∞ (H κ(x) , B K ) is bounded by a constant independent of K. Taking K to infinity, we find that ξ x,t ∞ (H κ(x) , R d ) has finite mean, so is almost surely finite.
Now suppose that f ∈ B(Ω) and that A1 holds. Then by (3.18), for almost all (x, z) ∈ Ω 0 × R d , we have (4.26) as λ → ∞. By Lemma 4.2, the assumption that κ is bounded, and (4.26), there is a constant C such that (4.27) holds for almost every (x, z) with κ(x) > 0. If x is a Lebesgue point of f , then for any K > 0, by (4.13) we have a corresponding estimate, and combining this with (4.26) and the dominated convergence theorem gives us (4.28). On the other hand, by (4.27) and the assumption that f is bounded, we have a complementary bound, and combining this with (4.28) we obtain the convergence of the inner integral. By the Lebesgue density theorem, almost every x ∈ Ω 0 is a Lebesgue point of f . Hence, under A1, by (4.2), (4.24) and the dominated convergence theorem, combined with (4.21) and (4.3), this shows that λ −1 Var f, µ ξ λ → σ ξ,κ f,f , as required.
For the proof of the central limit theorem (Theorem 2.2), we shall use results on normal approximation for f, µ ξ λ , suitably scaled. In the case of point measures, these were proved by Stein's method in (24), and the method carries through to more general measures. Let Φ denote the standard normal distribution function.
Lemma 4.4. Suppose that Ω ∞ is bounded and κ ∞ < ∞. Suppose that ξ is exponentially stabilizing and satisfies the moments condition (2.7) for some p > 2. Let f ∈ B(Ω), and let q ∈ (2, 3] with q < p. Then there exists a finite constant C, depending on d, ξ, κ, q and f , such that (4.29) holds for all λ > 1.
Proof. In the case where Assumption A1 holds, i.e. where ξ(x, t; X , ·) is always a point mass at x, this result is Theorem 2.3 of (24). If we do not make this assumption on ξ, the proof in (24) carries through with little change, except that the T λ and T ′ λ of Section 4.3 of (24) should now be defined (following the notation of (24)) accordingly.
Lemma 4.5. Suppose that Ω ∞ is bounded and κ ∞ < ∞. Suppose for some p > 3 that ξ is power-law stabilizing of order q for some q > d(150 + 6/p), and satisfies the moments condition (2.7). Let f ∈ B(Ω). Suppose that λ −1 Var f, µ ξ λ converges, as λ → ∞, to a finite limit σ 2 . Then f, λ −1/2 µ ξ λ converges in distribution, as λ → ∞, to the N (0, σ 2 ) distribution.
Proof. In the case where A1 holds, this result is Theorem 2.5 of (24). If we do not make this assumption on ξ, the proof in (24) carries through with the same minor changes as indicated for Lemma 4.4 above.
Proof of Theorem 2.2. Suppose that supp(κ) is bounded and κ ∞ < ∞. Suppose ξ is almost everywhere continuous, and is κ(x)−homogeneously stabilizing at x for κ-almost all x ∈ R d . Suppose ξ satisfies either A2 or A3, and satisfies A4.
Let f ∈ B(Ω), and assume that either A1 holds or f ∈ B̃(Ω). By Theorem 2.1, λ −1 Var f, µ ξ λ converges to the finite nonnegative limit σ ξ,κ f,f . So if σ ξ,κ f,f > 0, then the right hand side of (4.29) tends to zero, so that by Lemma 4.4 we obtain convergence in distribution to the N (0, σ ξ,κ f,f ) distribution. If A5 holds instead of A4, we obtain the same conclusion by using Theorem 2.1 and Lemma 4.5; note that in A5, since p > 3, the condition q > d(150 + 6/p) ensures that q > dp/(p − 2), so that Theorem 2.1 still applies here.

Extension to the non-Poisson case
In this section we prove Theorem 2.3. We assume throughout that the point processes X n are coupled as described at (3.4), in terms of a sequence ((X 1 , T 1 ), (X 2 , T 2 ), . . .) of independent random elements of R d × M with common distribution κ × µ M . Given f ∈ B(R d ) and λ > 0, for each m ∈ N we define the variables F m,λ , Y m and ∆ m as set out below. In this section we shall use the standard notation ∥X∥ p for the L p -norm (E [|X| p ]) 1/p of a random variable X, where p ≥ 1.
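The coupling at (3.4) admits a concrete illustration: the binomial process X n and the Poisson process P λ are built from the same i.i.d. marked sequence, so they agree on a common prefix of points. The following is a minimal Python sketch under illustrative assumptions (d = 2, points uniform on the unit square, a one-dimensional uniform mark); none of the specific choices come from the paper.

```python
import random, math

def poisson(lam, rng):
    # Knuth's multiplication method; adequate for moderate lam
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

rng = random.Random(42)
lam, n = 50.0, 50
# a single i.i.d. sequence of marked points (X_i, T_i): X_i uniform on
# [0,1]^2 and mark T_i uniform on [0,1] (illustrative assumptions)
seq = [((rng.random(), rng.random()), rng.random()) for _ in range(400)]

N = poisson(lam, rng)
P_lam = seq[:N]     # Poisson sample: the first N points of the sequence
X_n = seq[:n]       # binomial sample: the first n points of the same sequence
shared = min(N, n)
assert P_lam[:shared] == X_n[:shared]  # the two processes share a prefix
```

The de-Poissonization arguments of this section exploit exactly this agreement on a common prefix: the difference between the two processes involves only |N − n| points.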
By using (5.1), expanding, and taking expectations, we obtain (5.3). We shall establish the limiting behaviour of each term of (5.3) in turn; here and below, all domains of integration, if not specified, are R d . We assume that either f ∈ B̃(Ω) or A1 holds, so by Lemma 3.7, for almost all (x, y) ∈ Ω 0 × Ω 0 , we have (5.4). By (2.15), the variables f, ξ λ (x; X ℓ ) f, ξ λ (y; X x m−1 ) are uniformly integrable, so (5.4) extends to convergence of expectations, and moreover the limit is bounded. Making the change of variables z = λ 1/d (w − x), we obtain (5.8). By Lemma 3.7, for almost all (x, y) ∈ Ω 0 × Ω 0 , we have (5.9). By (2.15) and the Cauchy–Schwarz inequality, for large enough λ the variables on the left side of (5.9) are uniformly integrable. Therefore we have convergence of expectations, so that the integrand in (5.8) tends to the limit defined at (2.12). Also, by A4 ′ or A5 ′ , we may assume for some p > 2 and q > 2dp/(p − 2) that the moments condition (2.15) holds and that we have binomial power-law stabilization of order q (in the case of A5 ′ , since p > 3 the condition q > d(150 + 6/p) ensures that q > 2dp/(p − 2)). Therefore the Hölder and Minkowski inequalities yield (5.11). Since q > dp/(p − 2), this bound is integrable in z. By (5.8) and the dominated convergence theorem we obtain the limit of this term. Next, writing X 1 as x, X ℓ+1 as y, and making the corresponding substitution for X m+1 , we obtain (5.14). By Lemma 3.7 (note that we do not assume ℓ < m in that result), for almost all (x, y) ∈ Ω 0 × Ω 0 , we have (5.15). Using (2.15), we obtain the convergence of expectations corresponding to (5.15). Hence, the integrand in (5.14) converges to the expression given at (5.10). By a similar argument to the one used to establish (5.11), the absolute value of this integrand is bounded by a constant times |z| q(2−p)/p ∧ 1, and this is integrable since q > dp/(p − 2).
Hence, the dominated convergence theorem gives us the limit of this term. Next, by taking X 1 = x, X 2 = y, X ℓ+1 = x + λ −1/d z and X m+1 = y + λ −1/d w, we obtain (5.17). For almost all (x, y) ∈ Ω 0 × Ω 0 , Lemma 3.7 yields the convergence of the quantity inside the expectation; moreover, this quantity is uniformly integrable by the assumption that (2.15) holds for some p > 2. Hence we have the corresponding convergence of expectations, so the integrand in (5.17) converges. Hölder's inequality shows that, by the assumption (A4 ′ or A5 ′ ) that the moments condition (2.15) holds for some p > 2 and that ξ is binomially power-law stabilizing of order q > 2dp/(p − 2), the absolute value of the expectation in (5.17) is bounded by a constant times (|z| q(2−p)/(2p) ∧ 1)(|w| q(2−p)/(2p) ∧ 1), which is integrable in (z, w). Therefore the dominated convergence theorem applied to (5.17) gives the limit of this term. For the next term, by Lemma 3.7 the quantity inside the expectation tends to zero in probability for almost all x, y and all z; hence its expectation tends to zero as well, since it is uniformly integrable by (2.15). Also, the absolute value of this expectation is bounded by a constant times |z| q(2−p)/p ∧ 1, by a similar argument to (5.11). Hence, dominated convergence shows that this term tends to zero. By Lemma 3.7, for almost every x ∈ Ω 0 , we have the corresponding convergence, and using (2.15) we have the corresponding convergence of expectations, so that the integrand in (5.20) tends to zero. Also by (2.15), this integrand is bounded, and thus this term tends to zero as well. Next, setting X 1 = y, X ℓ+1 = x, and X m+1 = y + λ −1/d z, we obtain (5.22). By Lemma 3.7, for almost all (x, y) ∈ Ω 0 × Ω 0 and all z, as λ → ∞ the quantity inside the expectation in (5.22) tends to zero in probability, and by (2.15) it is uniformly integrable. Hence the integrand in (5.22) tends to zero. Also, by a similar argument to (5.11), the absolute value of this integrand is bounded by a constant times |z| q(2−p)/p ∧ 1, which is integrable since q > dp/(p − 2).
Thus, the integrand in (5.22) is bounded by an integrable function of (x, y, z), so the dominated convergence theorem shows that this term also tends to zero. Next, writing X ℓ+2 as x and X ℓ+1 as y, and making the corresponding substitution for X m+1 , we obtain (5.24). By a similar argument to (5.11), the absolute value of the expectation inside the integral is bounded by a constant times |z| q(2−p)/p ∧ 1, which is integrable since q > dp/(p − 2). Therefore, the triple integral in (5.24) is bounded, and since m − ℓ − 1 = o(λ), it follows that this term tends to zero as λ → ∞.
The next result, similar to but simpler in its proof than Lemma 5.1, provides a uniform second moment bound on F m,λ , which will be used to establish the law of large numbers type behaviour of f, µ λ − ν λ,n in the de-Poissonization argument.
Proof. We abbreviate notation as in the preceding proof. Note that our assumptions (in particular A4 ′ or A5 ′ ) imply that for some p > 2 and q > 2dp/(p − 2), (2.15) holds and ξ is binomially power-law stabilizing of order q.
Let m = m(λ), λ ≥ 1, be defined so that m(λ) ∼ λ as λ → ∞. By a similar expansion to (5.3), we obtain a sum of terms, which we consider one by one. First, E [Y 2 m+1 ] is bounded, by (2.15). Second, use of Hölder's inequality, followed by (2.15) and the binomial power-law stabilization, shows that the absolute value of the expectation in the integrand is bounded by an expression which is an integrable function of z. This shows that 2mE [Y m+1 ∆ m ] is bounded.
Finally, take X 1 = x and X m+1 = x + λ −1/d z to obtain the remaining term. Since the quantity inside the expectation is zero unless R λ,m−1 (x) ≥ |z|, Hölder's inequality shows that this expectation is bounded by an expression which is integrable in z. Hence, mE [∆ 2 1,m ] is also bounded.
We use a de-Poissonization argument similar to those in (21; 3). Let H n := f, ν ξ λ(n),n and H ′ n := f, µ ξ λ(n) . For this proof, assume that for all n, X n is given by (3.4) and that P λ(n) is coupled to X n by setting P λ(n) = ∪ Nn i=1 {(X i , T i )}, with N n an independent Poisson variable with mean λ(n). First we show that the displayed convergence holds as n → ∞. To prove this, note that the expectation on the left hand side equals the expression displayed. Let ε > 0. By (5.1) and Lemmas 5.1 and 5.2, there exists c > 0 such that for large enough n and all m with λ(n) ≤ m ≤ λ(n) + n 3/4 , the stated bound holds, where the bound comes from expanding the double sum arising from the expectation of the squared sum. A similar argument applies when λ(n) − n 3/4 ≤ m ≤ n, and hence the first term in (5.33) is bounded by n −1 (ε(λ(n) − n) 2 + ελ(n) + cλ(n) 1/2 + c|λ(n) − n|). Since ε is arbitrary, the first term in (5.33) tends to zero.

Applications
There are interesting potential applications of the theory with non-translation-invariant ξ to topics in multivariate statistics such as nonparametric density estimation and nonparametric regression (4; 2; 6), but these are not easy to describe briefly.

Germ-grain models
Germ-grain models are a fundamental model of random sets in stochastic geometry; see for example (9; 15; 26). In the germ-grain model, a random subset of R d is generated as the union of sets (X i + T i ) where {X i } (the germs) are the points of a point process, and {T i } (the grains) are independent identically distributed random compact subsets of R d . Our results can be applied to obtain limit theorems for random measures associated with germ-grain models, in the case where the point process of germs is P λ or X n , and where the grains are scaled by a factor of λ −1/d as λ → ∞.
When X is P λ or X n , the set Ξ λ (X ) is a germ-grain model with germs given by a Poisson process or binomial process and grains scaled by a factor of λ −1/d . We can apply our general results to the volume measure of Ξ λ (P λ ) (i.e., the restriction of Lebesgue measure to Ξ λ (P λ )) and the surface measure of Ξ λ (P λ ) (i.e., the restriction of (d − 1)-dimensional Hausdorff measure to the boundary of Ξ λ (P λ )), and likewise for X n .
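For intuition, the volume measure of such a germ-grain model with the λ −1/d grain scaling can be estimated by Monte Carlo. The sketch below is a simplified Python illustration with d = 2, n germs uniform on the unit square, and ball-shaped grains of a fixed radius; all of these choices are assumptions for the example, not the paper's setting (which allows general random compact grains and a general density κ).

```python
import random, math

rng = random.Random(0)

# hypothetical germ-grain model in the unit square: n germs, each grain a
# ball of radius r * n**(-1/2), i.e. the lambda**(-1/d) scaling with d = 2
n, r = 40, 0.8
scale = n ** (-0.5)
germs = [(rng.random(), rng.random()) for _ in range(n)]

def in_union(y):
    # y lies in the germ-grain set iff it is covered by some scaled grain
    return any(math.hypot(y[0] - x[0], y[1] - x[1]) <= r * scale
               for x in germs)

# Monte Carlo estimate of the total volume measure of the union in [0,1]^2
trials = 20000
hits = sum(in_union((rng.random(), rng.random())) for _ in range(trials))
vol_estimate = hits / trials
assert 0.0 <= vol_estimate <= 1.0
```

Repeating this experiment over independent germ configurations gives an empirical picture of the fluctuations of the volume measure that the central limit theory below describes.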
For t ∈ M, i.e. for t a compact set in R d , let |t| := max{|z| : z ∈ t} and let ∥t∥ denote the Lebesgue measure of t.
Theorem 6.1. Suppose κ is bounded and has bounded support. Suppose for some p > 3 that E [∥T ∥ p ] < ∞ and that there exist C > 0 and q > d(150 + 6/p) such that P [|T | > s] < Cs −q for all s. Then the finite-dimensional distributions of the random field converge to those of a centred Gaussian random field with covariances given by σ ξ,κ f,g as defined by (2.14), with ξ as described in the proof below. Likewise, if |λ(n) − n| = O(n 1/2 ), then the finite-dimensional distributions of the random field converge to those of a centred Gaussian random field with covariances given by τ ξ,κ f,g as given by (2.17).
Proof. For finite X ⊂ R d × M, let π(X ) denote the projection of X onto R d , i.e. the subset of R d obtained if we ignore the marks carried by points of X . Also, for each x ∈ π(X ) let T (x) denote the mark carried by x, i.e. the value of t such that (x, t) ∈ X . For y ∈ Ξ 1 (X ) = ∪ (x,t)∈X (x + t), let N X (y) denote the nearest point x ∈ π(X ) to y such that y ∈ x + T (x) (in the event of a tie when seeking the 'nearest point', use the lexicographic ordering as a tie-breaker). Take ξ(x, t; X , ·) to be the restriction of Lebesgue measure to the set of y ∈ x+t such that x = N X (y).
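The allocation rule defining ξ in this proof — assign each covered point y to its nearest covering germ N X (y), with lexicographic order as tie-breaker — can be sketched directly. The following Python fragment assumes d = 2 and ball-shaped grains (a simplification; the paper allows arbitrary compact grains), with a marked point represented as a pair ((x1, x2), t) where t is the ball radius.

```python
import math

def nearest_covering_germ(y, marked_points):
    """Return N_X(y): among germs x whose grain covers y (here the grain is
    the closed ball of radius t about x -- an illustrative assumption),
    the germ closest to y, with lexicographic order breaking ties."""
    covering = [x for (x, t) in marked_points
                if math.hypot(y[0] - x[0], y[1] - x[1]) <= t]
    if not covering:
        return None  # y is not in the germ-grain set
    # sort key: Euclidean distance first, then lexicographic tie-break
    return min(covering,
               key=lambda x: (math.hypot(y[0] - x[0], y[1] - x[1]), x))

pts = [((0.0, 0.0), 1.0), ((1.0, 0.0), 1.0)]
assert nearest_covering_germ((0.2, 0.0), pts) == (0.0, 0.0)
assert nearest_covering_germ((0.9, 0.0), pts) == (1.0, 0.0)
assert nearest_covering_germ((5.0, 5.0), pts) is None
```

Since each covered point is assigned to exactly one germ, summing the resulting restricted Lebesgue measures over germs recovers the volume measure of the union, which is the identity used in the proof.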
Then, since N X (y) is unique for each y ∈ Ξ 1 (X ), the sum Σ (x,t)∈X ξ(x, t; X , ·) is precisely the volume measure of Ξ 1 (X ). Also, ξ is translation-invariant, so by (2.3), the corresponding rescaled sum is the volume measure of Ξ 1 (X λ ), which is the same as the volume measure of Ξ λ (X ). Hence, with this choice of ξ, we have (6.2). The measure ξ(x, t; X , ·) is supported by x + t, and this measure is unaffected by changes to X outside B 2|t| (x) × M. This is because for any y ∈ x + t it is the case that |y − N X (y)| ≤ |t|, so by the triangle inequality, N X (y) lies in B 2|t| (x). Hence, 2|t| serves as a radius of stabilization, which is almost surely finite since M is the space of compact subsets of R d . Also, ξ(x, t; X , R d ) is bounded by ∥t∥, and the conditions (2.7) and (2.8) follow. We can then apply Theorems 2.2 and 2.3, with Ω λ = R d for all λ, and (6.2), to obtain the result.
One can apply Theorem 2.1 of (19) to the choice of ξ in the preceding proof, to obtain a law of large numbers showing, for example, L 1 convergence of ∫ Ξ λ (P λ ) f (x)dx to a deterministic limit, under an L p+ε moments condition on ∥T ∥.
Theorem 6.1 adds to the results for germ-grain models in (3), Section 3.3, in several ways. In particular, in (3) it is assumed that the distribution of |T | is supported by a compact interval, whereas here we need only power-law decay of the tail of this distribution. Also, in (3) the term 'volume measure' is used in a non-standard way to refer to an atomic measure supported by the points of X . Our usage of the terminology 'volume measure' seems more natural, and is also in agreement with the standard usage found, for example, in (11; 26).
The literature concerned with central limit theorems for the volume in germ-grain models goes back at least to Mase (13), and also includes Hall (9), Heinrich (10), Molchanov and Stoyan (16), Götze et al. (8), as well as (11). Typically in these works, the method is via a central limit theorem for stationary m-dependent random fields, along with truncation of the shapes if unbounded. Our work complements these: we consider non-stationary inhomogeneous Poisson processes P λ , and binomial point processes X n , and our central limit theorem is for the volume measure on R d , not just for its total measure (it is not clear whether the general results in (11) can be applied to the volume measure). On the other hand, in the case of the total volume measure on a homogeneous Poisson process, our result requires moments conditions which are stronger than those in the existing literature, so presumably our moments conditions are also non-optimal in the non-homogeneous case.
To aid comparison with the literature just mentioned, we reformulate Theorem 6.1 in the homogeneous case. Let b > 0 and let W be a fixed convex set of volume b −1 in R d (for example, a cube) with 0 ∈ W . For the purposes of this discussion, suppose that κ is the uniform distribution on W (i.e., κ(x) = b for x ∈ W and κ(x) = 0 otherwise). For λ > 0 set W λ := λ 1/d W (a 'window' of volume λ/b in R d ), and W * λ := W λ × M. Then by (6.1), the random measure under consideration can be expressed in terms of H b ∩ W * λ , where H b is as defined in Section 2. Let f ∈ B̃(R d ). Thus, under the conditions of Theorem 6.1, that result tells us that the random field converges in distribution, as λ → ∞, to a centred Gaussian field with covariances given by (6.3), where ∥ · ∥ denotes Lebesgue measure and ξ is as in the proof of Theorem 6.1. By comparison, Mase (13) (see also (16)) has shown that, under weaker moments conditions, (6.4) holds. There is a difference between Ξ 1 (H b ∩ W * λ ) and Ξ 1 (H b ) ∩ W λ due to boundary effects. At least in the case where the distribution of |T | is compactly supported, it can be shown that (6.5) holds, and it may be possible by truncation arguments to extend the validity of (6.5) to other cases. A comparison of (6.3) and (6.4) yields (6.6), provided (6.5) holds. Thus, whenever (6.5) holds, the expression given by (6.6) for V ξ (x, b) can be used in Theorem 6.1.
We now consider (for general κ) the surface measure of Ξ λ (X ). By this we mean the (d − 1)-dimensional Hausdorff measure restricted to the boundary of Ξ λ (X ). As we shall discuss later, some of the related literature uses the term 'surface measure' differently.
We assume here that with probability 1, each grain is a finite union of bounded convex sets.
For (x, t) ∈ X , let (x + t) o and ∂(x + t) denote the interior and boundary, respectively, of the set x + t. For z ∈ ∪ (x,t)∈X (x + t) o , let N * X (z) denote the closest point x ∈ π(X ) to z such that z ∈ (x + T (x)) o , using lexicographic ordering as a tie-breaker (here T (x) is as in the proof of Theorem 6.1).
For (x, t) ∈ X we shall take the contribution of (x, t) to the surface measure on Ξ λ (X ) to be the surface measure on the set x + λ −1/d t, restricted to the complement of ∪ (y,t ′ )∈X \{(x,t)} (y + λ −1/d t ′ ) o , thereby ignoring the possibility that for some (y, t ′ ) the intersection of the boundaries of the grains x + λ −1/d t and y + λ −1/d t ′ has non-zero (d − 1)-dimensional Hausdorff measure. This is (almost surely) justified by the following lemma (Lemma 6.1). For its proof, let ψ(dy) be surface measure on ∂(K 1 ). Then for each y ∈ ∂(K 1 ), the set of x ∈ R d with (x, y) ∈ A is Lebesgue-null, so the conclusion follows by Fubini's theorem. Given X , define the set N C(x, t) (i.e., the points for which x is the 'nearest covering' germ) to be the set of points interior to x + t which are not covered by any set with germ closer than x. Define the set CC(x, t) (the points of ∂(x + t) with 'closer cover') analogously. Let us take ξ(x, t; X , ·) to be the following signed measure: • Let ξ + (x, t; X , ·) be the restriction of (d − 1)-dimensional Hausdorff measure to ∂(x + t) \ CC(x, t).
The signed measure ξ(x, t; X , ·) is supported by x + t and is unaffected by changes to X outside B 2|t| (x) × M. Achieving this is the purpose of the definition of ξ used here, since it ensures that 2|T | serves as a radius of stabilization.
We assert that the sum Σ (x,t)∈X ξ(x, t; X , ·) is precisely the surface measure of Ξ 1 (X ), almost surely. To see this, suppose z lies on the surface of x + t, but is covered by some other (y + u) o with (y, u) ∈ X (take the closest such y to z). If |y − z| < |x − z|, then z ∈ CC(x, t), so that ξ + (x, t; X , dz) = ξ − (x, t; X , dz) = 0. If |y − z| > |x − z|, then z ∉ CC(x, t), so that z ∈ N C(y, u) ∩ (∂(x + t) \ CC(x, t)); then ξ + (x, t; X , dz) is the surface measure of ∂(x + t), and ξ − (y, u; X , dz) is also the surface measure of ∂(x + t), and these cancel out. If also z ∈ (w + v) o with (w, v) ∈ X and |w − z| > |y − z|, then z ∉ N C(w, v), so that ξ − (w, v; X , dz) = 0. Finally, Lemma 6.1 tells us that the total surface measure of the set of points lying on the boundary of two or more grains is almost surely zero.
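The cancellation argument above can be checked in a toy one-dimensional analogue, where grains are intervals [x − t, x + t] and 'surface measure' is counting measure on endpoints. Since the displayed definition of ξ − is not reproduced above, the sketch assumes the reading suggested by the argument: ξ − (x, t) charges the points of N C(x, t) lying on another grain's boundary outside that grain's closer-cover set CC. Under this assumption, the signed sum recovers the boundary of the union.

```python
from collections import Counter

grains = [(0.0, 1.0), (2.5, 2.0)]          # (germ x, radius t)

def covers(g, z):                          # z interior to grain g?
    x, t = g
    return x - t < z < x + t

def nearest_cover(z):
    cov = [g for g in grains if covers(g, z)]
    return min(cov, key=lambda g: abs(z - g[0])) if cov else None

def in_CC(g, z):                           # does z on g's boundary have a closer cover?
    return any(covers(h, z) and abs(z - h[0]) < abs(z - g[0])
               for h in grains if h != g)

plus, minus = Counter(), Counter()
for g in grains:
    x, t = g
    for z in (x - t, x + t):               # xi^+: endpoints without closer cover
        if not in_CC(g, z):
            plus[z] += 1
    for h in grains:
        if h is g:
            continue
        for z in (h[0] - h[1], h[0] + h[1]):   # xi^-: cancel covered endpoints
            if nearest_cover(z) == g and not in_CC(h, z):
                minus[z] += 1

signed = plus - minus                      # Counter difference keeps positive mass
union_boundary = {z for (x, t) in grains for z in (x - t, x + t)
                  if not any(covers(h, z) for h in grains)}
assert set(signed) == union_boundary
```

Here the endpoint 1.0 of the first grain is covered by the second grain's interior, from a germ farther away; ξ + counts it and ξ − of the covering grain cancels it, exactly as in the argument above.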
To apply our general results here, we need the moments conditions such as (2.7) to apply to both the positive and negative parts of the measure ξ. We write |∂T | for the (d − 1)-dimensional Hausdorff measure of the boundary of T . Clearly this is an upper bound for ξ + (x, T ; X , R d ).
To estimate the negative part, observe that all contributions to ξ − (x, T ; X ) come from the boundaries of sets associated with germs within distance at most 2|T | from x. Hence, ξ − (x, T ; X , R d ) is stochastically bounded by Σ N i=1 X i , where the X i are independent copies of |∂T | and, given T , N is Poisson distributed with mean θ(2|T |) d , with θ denoting the volume of the unit ball. By Minkowski's inequality, the pth moment of this sum is bounded by E [N p ]E [|∂T | p ], and hence the condition (6.7) below suffices to give us all the moments conditions (2.7), (2.8), and (2.15), for both the positive and the negative parts of ξ.
Thus, with this choice of ξ we may apply Theorem 2.1 of (19) (with test functions f ∈ B̃(R d )) if for q = 1 or q = 2 we have (6.7) for some p > q. We may apply Theorems 2.2 and 2.3 (again with test functions f ∈ B̃(R d )), either if (6.7) holds for some p > 2 and P [|T | > r] ≤ Ce −r/C for some C > 0 and all r > 0, or if for some p > 3 and some q > d(150 + 6/p), (6.7) holds and P [|T | > r] ≤ Cr −q for some C > 0 and all r > 0.
Thus under these conditions, we have asymptotic normality, e.g. for n −1/2 f, ν λ(n),n (which is n −1/2 λ(n) (d−1)/d times the integral of f with respect to the surface measure of Ξ λ(n) (X n )), with limiting variance τ ξ,κ f,f given by (2.17). This limiting variance is zero if ∫ R d f (x)κ(x)dx = 0; if this integral is non-zero, it should be possible to show that the limiting variance is non-zero using arguments such as those of (3).
A limiting Gaussian field for 'surface area measure' is discussed in (11), for homogeneous point processes. The surface area measure in (11) is different from that considered here, being a measure on the product of a window in d-dimensional space with the (d − 1)-dimensional unit sphere. It measures the surface area of all boundary points whose normals belong to any specified set in the unit sphere. If this specified set is taken to be the whole sphere, the result of (11) yields a CLT for the total surface area inside an expanding window.
Taking f ≡ 1 in the result on surface measure of the present paper (obtained by taking κ to be uniform on a cube and scaling space as described earlier in the case of volume measure) yields a similar CLT for total surface area in an expanding window, but under stronger moments conditions than those of (11). By taking other f in our result, we can obtain (for example) a multivariate CLT for the surface areas in a finite collection of simultaneously expanding windows. This is not in (11) but it may well be possible to extend the methods used there to obtain such a multivariate CLT under weaker moments conditions than those we need here. We thank a referee for pointing this out.

Random and cooperative sequential adsorption
The random packing measures discussed in Section 3.2 of (3) are obtained by particles (typically balls) being deposited in d-space at random times, according to a space-time Poisson process. Particles have non-zero volume and (in some versions of the model) may grow with time, but deposition and growth are limited by an excluded volume effect. Motivation (when d = 2) comes from, for example, the modelling of chemical surface coating processes (see (7), and the references in (3; 22)).
In (3), the measures associated with these packing processes are obtained as a sum of unit point masses, with one point for each particle. As in the case of germ-grain models, it is quite natural instead to consider the volume measure associated with the random set obtained as the union of particles (balls), or even the surface measure of this random set. The setup of this paper enables us to obtain central limit theorems for these measures, but we do not give further details here.
A variant of these deposition processes is the cooperative sequential adsorption process (see e.g. (7)) which goes as follows. Suppose points (particles) arrive sequentially in R d , with locations Y 1 , . . . , Y n independently identically distributed with distribution κ. Each incoming particle Y i is either accepted (irreversibly) or rejected, being accepted with a probability that depends on the relative positions of previously accepted points nearby, i.e. previously accepted points within a distance λ −1/d of the incoming point. Here λ is a scaling parameter which is linked to n in the limit theory to be described. We shall assume that each accepted particle carries a random point mass, the size of which has a distribution depending on the particle's location, and obtain a limit theory for the measure obtained as the sum of resulting point masses.
More precisely, we assume a measurable function φ, taking values in the interval [0, 1], is defined on all finite subsets of B 1 . Given a value of the scaling parameter λ > 0, the set of accepted points after the arrival of Y 1 , . . . , Y m is denoted A m,λ with A 0,λ = ∅. Given that the mth incoming particle arrives at location x ∈ R d (i.e. given Y m = x), the probability of its being accepted is taken to be φ(λ 1/d (−x+A m−1,λ )∩B 1 ). We can use Theorem 2.3 to obtain a Gaussian limit for the associated random point measure, as follows.
To translate this into our general setting, set M = [0, 1] 3 , with µ M the uniform distribution, so each point x ∈ R d comes with a mark T = (U, U ′ , U ′′ ) with U, U ′ , U ′′ independent and uniformly distributed over [0, 1]. The first mark U represents the 'time of arrival', so the points X 1 , . . . , X n of X n are ordered in order of increasing mark U to obtain the ordered sequence (Y 1 , . . . , Y n ). The second mark U ′ is used to achieve the randomization which determines whether the point is accepted or not. That is, if the mth point Y m is at location x and carries second mark U ′ , it is accepted if U ′ ≤ φ(λ 1/d (−x + A m−1,λ ) ∩ B 1 ), and is rejected otherwise. The third mark is used to determine the weight of the point mass at an accepted point. We assume there is a constant K > 0 and, for each x ∈ R d , a function ψ x : [0, 1] → [0, K], such that the weight of an accepted point at x carrying third mark U ′′ is obtained as ψ x (U ′′ ).
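The acceptance mechanism admits a short simulation sketch. The Python fragment below assumes d = 2 and a hypothetical acceptance functional φ that halves the acceptance probability for each previously accepted point within distance λ −1/2 of the arrival; both choices are illustrative assumptions, not the paper's specification.

```python
import random, math

rng = random.Random(7)

def phi(nearby):
    # hypothetical acceptance rule: probability 2**(-k) when k previously
    # accepted points lie within the (rescaled) unit ball of the arrival
    return 0.5 ** len(nearby)

def csa(points, lam):
    """Cooperative sequential adsorption: arrival Y_m with second mark U'
    is accepted iff U' <= phi applied to the previously accepted points
    within distance lam**(-1/d) of Y_m (here d = 2)."""
    accepted = []
    radius = lam ** (-0.5)
    for (y, u_prime) in points:
        nearby = [a for a in accepted
                  if math.hypot(y[0] - a[0], y[1] - a[1]) <= radius]
        if u_prime <= phi(nearby):
            accepted.append(y)
    return accepted

# arrivals: (location in the unit square, second mark U')
arrivals = [((rng.random(), rng.random()), rng.random()) for _ in range(100)]
A = csa(arrivals, lam=100.0)
assert 1 <= len(A) <= 100   # the first arrival is always accepted (phi of the empty set is 1)
```

The random point measure of the theory is then the sum of weighted point masses ψ x (U ′′ ) over the accepted locations.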
We take ξ((x, T ); X ) to be a point mass of size ψ x (U ′′ ) at x if x is accepted and 0 otherwise. So A1 holds. If x → ψ x (t) is almost everywhere continuous on R d for almost all t ∈ [0, 1], then A3 holds.
Assume κ is bounded. The homogeneous and exponential stabilization conditions all hold because of arguments in (22). The moments condition of A4 holds since we have assumed all point masses are bounded by the constant K. Thus we can apply Theorem 2.3 to obtain the following central limit theorem, where the point measure ν ξ λ,n is the sum of the point masses at the points of A n,λ .
Theorem 6.2. Suppose κ ∞ < ∞ and κ has bounded support. Suppose x → ψ x (t) is almost everywhere continuous on R d for almost all t ∈ [0, 1]. Let f ∈ B(R d ). Then for any sequence (λ(n), n ∈ N) taking values in (0, ∞) such that lim sup n→∞ n −1/2 |λ(n) − n| < ∞, we have as n → ∞ that n −1 Var( f, ν ξ λ(n),n ) → τ ξ,κ f,f as defined at (2.17), and that n −1/2 f, ν ξ λ(n),n , suitably centred, is asymptotically normal with variance τ ξ,κ f,f .
This example is translation-invariant if ψ x does not depend on x; otherwise it is not. It may be possible to obtain similar results to ours using Theorem 2.5 of (3), but this appears to need stronger conditions on ψ x , and is also restricted to continuous test functions.

Voronoi tessellations and nearest neighbour graphs
Throughout this section we set Ω = supp(κ) and set Ω λ to be the interior of Ω, for all λ. We assume that Ω is convex and bounded, and that the restriction of κ to Ω is bounded away from zero and infinity.
Given a finite point set X ⊂ R d , the Voronoi cell of a point x ∈ X , relative to X , is the set of all y ∈ R d lying closer to x, in the Euclidean sense, than to any other point of X . We denote this Voronoi cell by C(x; X ). Each such cell is the intersection of finitely many half-spaces, and the Voronoi cells form a tessellation of R d . The (d − 1)-dimensional Hausdorff measure on the union of the boundaries of the cells C(x; X ), x ∈ X , is a σ-finite measure on R d ; we consider its restriction to Ω, which is a finite measure, for general d ≥ 2.
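The defining property of the Voronoi cell — y belongs to C(x; X ) exactly when x is the nearest point of X to y — can be expressed directly. The sketch below uses closed cells, so points of the shared hyperplane boundary between two cells belong to both; this is an illustrative convention for the example.

```python
import math

def voronoi_cell_contains(y, x, points):
    """y lies in the (closed) Voronoi cell C(x; X) iff x is weakly the
    nearest point of X to y in the Euclidean sense."""
    d_x = math.dist(y, x)
    return all(d_x <= math.dist(y, z) for z in points)

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
# (0.1, 0.1) is nearest to the origin, so it lies in C((0,0); X)
assert voronoi_cell_contains((0.1, 0.1), (0.0, 0.0), X)
assert not voronoi_cell_contains((0.9, 0.1), (0.0, 0.0), X)
# the bisecting hyperplane between (0,0) and (1,0) is x1 = 1/2; its points
# lie in both closed cells, illustrating the shared (d-1)-dimensional boundary
assert voronoi_cell_contains((0.5, 0.0), (0.0, 0.0), X)
assert voronoi_cell_contains((0.5, 0.0), (1.0, 0.0), X)
```

The shared boundary points are exactly where the Hausdorff measure considered in this section lives, which is why the choice of ξ below takes half the boundary measure of each cell.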
In the case d = 2, the union of the boundaries of the cells C(x; X ), x ∈ X , is called the Voronoi graph; some of its edges are infinite. In (21) a central limit theorem (CLT) is given for the total length of the finite edges of the Voronoi graph on randomly distributed points; in (3) this is extended to a CLT for the measure obtained as the sum of point masses at the points x of X , each of mass equal to the sum of the lengths of those edges of the Voronoi graph which are finite and form part of the boundary of C(x; X ). However, both of these results rely on Lemma 8.1 of (21), which seems to be erroneous.
Using the results of this paper we can obtain a CLT for the (d − 1)-dimensional Hausdorff measure of the boundaries of Voronoi cells, acting on test functions in B̃(Ω), without relying on the incorrect lemma from (21). Our result adds to the law of large numbers of McGivney and Yukich (14), and to the normal approximation result for d = 2 given by Avram and Bertsimas (1). Both (14) and (1) consider only the total length of the Voronoi graph in Ω with d = 2 and Ω the unit square; (1) specializes further to the case of Poisson samples with κ a uniform distribution over Ω, and does not show convergence of the variance.
We take ξ(x; X, ·) to be half the (d − 1)-dimensional Hausdorff measure on the boundary of C(x; X). This is translation-invariant and, by (2.3), µ^ξ_λ given by (2.4) is λ^{(d−1)/d} times the (d − 1)-dimensional Hausdorff measure on the union of the boundaries of the cells C(x; P_λ), x ∈ P_λ, and ν^ξ_{λ,n} is the analogous measure for X_n. With our conditions on κ, this choice of ξ is both homogeneously and exponentially stabilizing.
To show exponential stabilization one needs to take care over points near the boundary of Ω, so we give further details.
Let C_i, 1 ≤ i ≤ I, be a finite collection of infinite open cones in R^d with angular radius π/12 and apex at 0, whose union is R^d. For x ∈ Ω and 1 ≤ i ≤ I, let C_i(x) be the translate of C_i with apex at x, and let C_i^+(x) be the open cone concentric to C_i(x) with apex x and angular radius π/6. Suppose A ⊂ R^d, and suppose X ⊂ R^d is finite with x ∈ X ∩ A. We construct an upper bound on R(x; X, A) as follows. For 1 ≤ i ≤ I, let R_i(x; X, A) be the distance from x to the closest point of X ∩ C_i^+(x), if such a point exists and this distance is less than diam(C_i(x) ∩ A); otherwise set R_i(x; X, A) := diam(C_i(x) ∩ A). Set R^+(x; X, A) := max_{1≤i≤I} R_i(x; X, A).
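In the plane this construction is easy to make concrete: twelve cones of angular radius π/12 with equally spaced axes cover R^2, and each is widened to angular radius π/6. The Python sketch below (hypothetical names; the cap `diam_bound` stands in for diam(C_i(x) ∩ A)) computes the resulting radius R^+ under these assumptions:

```python
from math import atan2, pi, hypot

def in_cone(y, apex, axis_angle, half_angle):
    """Is y in the open planar cone with the given apex, axis direction
    (an angle in radians) and angular radius half_angle?"""
    dx, dy = y[0] - apex[0], y[1] - apex[1]
    if dx == 0 and dy == 0:
        return False
    a = atan2(dy, dx) - axis_angle
    a = (a + pi) % (2 * pi) - pi          # wrap the angle to [-pi, pi)
    return abs(a) < half_angle

def R_plus(x, X, diam_bound, n_cones=12):
    """2-d sketch of R^+(x; X, A): for each cone C_i(x) take the distance
    from x to the nearest point of X in the widened cone C_i^+(x), capped
    at diam_bound; return the maximum over the cones."""
    radii = []
    for i in range(n_cones):
        axis = 2 * pi * i / n_cones
        dists = [hypot(p[0] - x[0], p[1] - x[1]) for p in X
                 if p != x and in_cone(p, x, axis, pi / 6)]
        r = min(dists) if dists else diam_bound
        radii.append(min(r, diam_bound))
    return max(radii)

# Demo: 12 points at distance 0.5 from the origin, one on each cone axis,
# so every cone sees a point and R_plus is 0.5.
from math import cos, sin
pts = [(0.5 * cos(2 * pi * k / 12), 0.5 * sin(2 * pi * k / 12))
       for k in range(12)]
r = R_plus((0.0, 0.0), pts, diam_bound=10.0)
```

With no points at all, every cone falls back to the cap, so `R_plus((0.0, 0.0), [], 10.0)` returns `10.0`, mirroring the `diam(C_i(x) ∩ A)` fallback in the definition.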
If y ∈ C_i^+(x) ∩ X then by elementary geometry C(x; X) ∩ C_i(x) ⊆ B_{|y−x|}(x). Hence any point added to or removed from X outside B_{2R^+(x;X,A)}(x) affects C(x; X), if at all, only at distance at least R^+(x; X, A) from x, and hence, by (6.8), does not affect C(x; X) ∩ A ∩ C_i(x). Hence changes to X at distance greater than 2R^+(x; X, A) from x do not affect the restriction of the measure ξ(x; X) to A, and so changes to X at distance greater than 2R^+(x; x + X, x + A) from 0 do not affect the restriction of the measure ξ(x; x + X) to x + A. The bound (6.9) follows. In particular, setting A = R^d and X = H_λ, we see from (6.9) that ξ is λ-homogeneously stabilizing (see Definition 2.3).
We shall use the next lemma to establish the moment conditions (2.7), (2.8) and (2.15).

Proof. For each facet F of A, and x ∈ F, let π_F(x) be the projection of x, orthogonally to F and away from A, onto the boundary of the cube [0, 1]^d. In other words, let π_F(x) denote the point at which the half-line starting at x, orthogonal to F and not meeting A in any point other than x, meets the boundary of [0, 1]^d. Let π_F(F) := ∪_{x∈F} π_F(x). Then µ_{d−1}(F) ≤ µ_{d−1}(π_F(F)), and we assert that for distinct facets F and F′, if x (respectively x′) is in the interior of F (respectively F′) then π_F(x) ≠ π_{F′}(x′). The result then follows.
To justify the above assertion, suppose it failed for some x and x′ as described. Set y := π_F(x) = π_{F′}(x′). Then the line xx′ makes an acute angle with at least one of the lines xy and x′y, and therefore the line segment xx′ passes outside A (because F and F′ lie in supporting hyperplanes of A); but this contradicts the convexity assumption on A.
Using Lemma 6.2 along with (6.8), we see that for convex A ⊂ R^d there is a constant C, depending only on d, such that ξ(x; X, A) ≤ C R^+(x; X, A)^{d−1}; then, using (6.10) and (6.11) with the tail estimates (6.12) and (6.13), we may deduce the moment conditions (2.7), (2.8) and (2.15). In short, conditions A2 and A4′ (though not A1) hold here, along with homogeneous stabilization. Thus both Theorems 2.2 and 2.3 are applicable, with the following conclusion.
We now turn to the k-nearest neighbour graph. Given k ∈ N, the (undirected) k-nearest neighbour graph is defined as follows. For finite X ⊂ R d with distinct inter-point distances, let G(X ) denote the graph on vertex set X obtained by placing an undirected edge between each pair of vertices x and y such that either y is one of the k nearest neighbours of x in X or x is one of the k nearest neighbours of y in X .
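This definition translates directly into code. A naive O(n² log n) Python sketch (hypothetical helper names; inter-point distances assumed distinct, as in the definition above) is:

```python
from math import dist
from itertools import combinations

def knn_graph(points, k):
    """Undirected k-nearest-neighbour graph G(points): edge {x, y} whenever
    y is one of the k nearest neighbours of x, or x is one of the k nearest
    neighbours of y (ties assumed absent)."""
    def k_nearest(p):
        others = sorted((q for q in points if q != p),
                        key=lambda q: dist(p, q))
        return set(others[:k])

    edges = set()
    for p, q in combinations(points, 2):
        if q in k_nearest(p) or p in k_nearest(q):
            edges.add((p, q))
    return edges

# Toy data: (3,0) has (1,0) as its nearest neighbour, but not vice versa,
# so the edge {(1,0),(3,0)} appears only through the "or" in the definition.
pts = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
E = knn_graph(pts, 1)
```

Note the asymmetry the "or" resolves: membership of y among the k nearest neighbours of x is not a symmetric relation, which is why the undirected graph is defined as it is.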
Nearest-neighbour type graphs have a variety of applications; for discussion, see e.g. Wade (27). Limit theorems for the k-nearest neighbour graphs have been considered in e.g. (3; 18; 21; 23; 27), along with a variety of other graphs on random points, such as directed k-nearest neighbour graphs, sphere of influence graphs and Delaunay graphs. We add here to the known results on k-nearest neighbour graphs, and could also similarly consider these other graphs.
In (3) a general CLT is given for a measure obtained as the sum of point masses at the points x of X, each with mass equal to the sum of the lengths of the edges of G(X) incident to x. Using the results of this paper we can extend this result to a larger class of test functions. Also, we can consider measures other than point measures associated with these graphs, and we now consider a natural choice of non-point measure. We take ξ(x; X, ·) to be half the 1-dimensional Hausdorff measure on the edges of G(X) incident to x. This is translation-invariant and, by (2.3), µ^ξ_λ (respectively ν^ξ_{λ,n}) given by (2.4) is λ^{1/d} times the 1-dimensional Hausdorff measure on G(P_λ) (respectively G(X_n)).
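The factor of one half is what makes the vertex contributions add up to the length measure of the whole graph: each edge contributes half its length at each of its two endpoints. A quick Python sketch of this bookkeeping (hypothetical names; `edges` as produced by any k-NN graph builder):

```python
from math import dist

def half_incident_length(x, edges):
    """Total mass of xi(x; X, .): half the summed length of the edges
    incident to x."""
    return 0.5 * sum(dist(p, q) for (p, q) in edges if x in (p, q))

# Toy graph: a path (0,0) -- (1,0) -- (3,0), edge lengths 1 and 2.
points = [(0.0, 0.0), (1.0, 0.0), (3.0, 0.0)]
edges = {((0.0, 0.0), (1.0, 0.0)), ((1.0, 0.0), (3.0, 0.0))}
total = sum(half_incident_length(p, edges) for p in points)
# Each edge is counted half from each endpoint, so `total` recovers the
# total edge length of the graph: 1 + 2 = 3.
```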
This choice of ξ is translation-invariant, so it satisfies A2. It does not satisfy A1, so we must restrict attention to a.e. continuous test functions (still a larger class of test functions than in (3)).
With our conditions on κ, the choice of ξ is both homogeneously and exponentially stabilizing. We give details of the latter, which are not in the literature.
In the case k = 1, let R_i(x; X, A) (1 ≤ i ≤ I) be defined exactly as in the Voronoi example. For larger values of k, modify the definition of R_i(x; X, A) by taking it to be the distance from x to its kth nearest neighbour in the set C_i^+(x) ∩ X ∩ A if this kth nearest neighbour exists and lies at distance less than diam(A ∩ C_i(x)) from x, and otherwise taking R_i(x; X, A) := diam(A ∩ C_i(x)). As before, set R^+(x; X, A) := max_{1≤i≤I} R_i(x; X, A).
We assert that an upper bound for R(x; x + X, x + A) is given by ⌈R^+(x; X, A)⌉. Indeed, consider the addition or removal of a point outside B_{R^+(x;X,A)}(x) but inside A. When R_i(x; X, A) = diam(A ∩ C_i(x)), the set A ∩ C_i(x) \ B_{R^+(x;X,A)}(x) is empty, so such an addition or removal must take place in some C_i(x) with R_i(x; X, A) < diam(A ∩ C_i(x)); in that case there are at least k points of X ∩ C_i(x) ∩ A at distance less than R^+(x; X, A) from x, and by elementary geometry these all lie closer than x does to the added or removed point. Hence the addition or removal does not affect the set of edges incident to x in the k-nearest neighbour graph. The assertion then follows in the same manner as the corresponding assertion (6.9) for the Voronoi example above.
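This screening effect can be checked numerically: once x has enough nearby neighbours in every direction, inserting a distant point leaves the edges incident to x untouched. A toy Python illustration (hypothetical names and data; k = 2), not a proof, just the phenomenon the argument above formalizes:

```python
from math import dist
from itertools import combinations

def knn_edges(points, k):
    """Undirected k-nearest-neighbour graph as a set of point pairs."""
    def k_nearest(p):
        return set(sorted((q for q in points if q != p),
                          key=lambda q: dist(p, q))[:k])
    return {(p, q) for p, q in combinations(points, 2)
            if q in k_nearest(p) or p in k_nearest(q)}

def incident(x, edges):
    """Edges of the graph incident to the vertex x."""
    return {e for e in edges if x in e}

x = (0.0, 0.0)
# A cluster of points surrounding x in several directions (toy data).
cluster = [x, (0.3, 0.0), (-0.3, 0.1), (0.0, 0.3),
           (0.1, -0.3), (-0.2, -0.2)]
before = incident(x, knn_edges(cluster, 2))
# Insert a point far away: it is screened by nearer neighbours, so the
# edge set at x is unchanged.
after = incident(x, knn_edges(cluster + [(50.0, 50.0)], 2))
```

Here `before == after` holds: the far point neither acquires x among its 2 nearest neighbours nor displaces any of x's, exactly as in the argument above.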
Using (6.12) and (6.13) we can then establish the exponential stabilization and binomial exponential stabilization needed to apply Theorem 2.3, and obtain the following CLT for measures.

Theorem 6.4. Let ξ(x; X) be half the 1-dimensional Hausdorff measure on the edges incident to x in the k-nearest neighbour graph on X. Let f ∈ B(Ω). Then for any sequence (λ(n), n ∈ N) such that lim sup_{n→∞} n^{−1/2} |λ(n) − n| < ∞, we have as n → ∞ that n^{−1} Var⟨f, ν^ξ_{λ(n),n}⟩ → τ_{ξ,κ}.

By the Mapping Theorem (12), P′_λ has the same distribution as P_λ, while H′_{κ(x)} has the same distribution as H_{κ(x)}.
Proof of Lemma 3.2. In this proof we write simply λ for λ(k), ℓ for ℓ(k), and m for m(k). We use the following coupling. Suppose we are given λ. On a suitable probability space, let P and P̃ be independent copies of P_λ, independent of X_1, X_2, . . ..
Let P′ be the point process in R^d × M consisting of those points (V, T) ∈ P such that |V − x| < |V − y|, together with those points (V′, U′) ∈ P̃ with |V′ − y| < |V′ − x|. Clearly P′ is a Poisson process of intensity λκ × µ_M on R^d × M.
Assuming λ is so large that |x − y| > 2λ^{−1/d}K, if E ∩ F occurs then the above identities hold. Combining these, we obtain the required convergence in distribution.
Before proving Lemma 3.3 we need to give two further lemmas which are also taken from (19).
Since Ω_0 ⊆ Ω_∞ ⊆ Ω and Ω_∞ is open, for any K > 0 the stated bound holds for all large enough m, where the convergence follows from (7.2). By assumption ξ^{x,T}_∞(H_{κ(x)}, R^d) is almost surely finite, so the limit in (7.6) itself tends to zero in probability as K → ∞, and therefore ξ_{λ(m)}(x; X_m, R^d \ Ω) → 0 in probability as m → ∞. Combining this with the case A = R^d of (7.2) and using Slutsky's theorem (see, e.g., (17)), we obtain (7.4).