Asymptotic results for multivariate estimators of the mean density of random closed sets

Abstract: The problem of the evaluation and estimation of the mean density of random closed sets in $\mathbb{R}^d$ with integer Hausdorff dimension $0 < n < d$ is of great interest in many different scientific and technological fields. Among the estimators of the mean density available in the literature, the so-called "Minkowski content"-based estimator reveals its benefits in applications in the non-stationary cases. We introduce here a multivariate version of this estimator, and we study its asymptotic properties by means of large and moderate deviation results. In particular we prove that the estimator is strongly consistent and asymptotically Normal. Furthermore, we provide confidence regions for the mean density of the involved random closed set at $m \ge 1$ distinct points $x_1, \dots, x_m \in \mathbb{R}^d$.


Introduction
A random closed set $\Theta_n$ of locally finite $n$-dimensional Hausdorff measure $\mathcal{H}^n$ induces a random measure

$$\mu_{\Theta_n}(A) := \mathcal{H}^n(\Theta_n \cap A), \qquad A \in \mathcal{B}_{\mathbb{R}^d},$$

and the corresponding expected measure is defined as

$$\mathbb{E}[\mu_{\Theta_n}](A) := \mathbb{E}[\mathcal{H}^n(\Theta_n \cap A)], \qquad A \in \mathcal{B}_{\mathbb{R}^d},$$

where $\mathcal{B}_{\mathbb{R}^d}$ is the Borel $\sigma$-algebra of $\mathbb{R}^d$. (For a discussion of the measurability of the random variables $\mu_{\Theta_n}(A)$, we refer to [5, 27].) Whenever the measure $\mathbb{E}[\mu_{\Theta_n}]$ is absolutely continuous with respect to the $d$-dimensional Hausdorff measure $\mathcal{H}^d$, its density (i.e. its Radon–Nikodym derivative) with respect to $\mathcal{H}^d$ is called the mean density of $\Theta_n$, and is denoted by $\lambda_{\Theta_n}$.
Examples of random closed sets with integer Hausdorff dimension $n$ less than $d$ are fiber processes, boundaries of germ-grain models, $n$-facets of random tessellations, and surfaces of full-dimensional random sets. The problem of the evaluation and estimation of the mean density $\lambda_{\Theta_n}$ has been of great interest in many different scientific and technological fields over the last decades. Recent areas of interest include pattern recognition and image analysis [23, 18], computer vision [24], medicine [1, 12, 13, 14], material science [11], etc. (see [8] for additional references).
The estimation of the mean density of non-stationary random sets turns out to be much more difficult, both from the theoretical and the applied point of view. With regard to this, an explicit formula for $\lambda_{\Theta_n}(x)$ (see Eq. (2)) and different kinds of estimators have been proposed in the recent literature (see [26, 8, 9] and references therein). One of these, named the "Minkowski content"-based estimator and denoted by $\widehat\lambda^{\mu,N}_{\Theta_n}(x)$ (see Eq. (4)), turns out to be asymptotically unbiased and weakly consistent, and it reveals its benefits in applications in the non-stationary cases. Indeed, the evaluation of the "Minkowski content"-based estimator at a generic point $x \in \mathbb{R}^d$ does not require any particular calculation, except for counting how many elements of the random sample of $\Theta_n$ have non-empty intersection with the ball centered at $x$. Whenever a random sample for the involved random closed set $\Theta_n$ is available, the feasibility of such an estimator is apparent: its computation reduces to checking whether any pixel corresponding to the ball belongs to the sample of $\Theta_n$ in its digital image. This is the reason why such an estimator deserves to be studied in great detail.
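Since the counting step above is the entire computational content of the estimator, the following minimal sketch (ours, not from the source) makes it concrete on a digital image; the binary-array representation and the names `sample_img` and `pixel_size` are illustrative assumptions.

```python
import numpy as np

def hits_ball(sample_img, x, r, pixel_size=1.0):
    """Return True iff the digital sample of Theta_n intersects the closed
    ball B_r(x): we simply look for a foreground pixel within distance r of x.
    sample_img is assumed (our convention) to be a d-dimensional 0/1 array
    whose indices, multiplied by pixel_size, give spatial coordinates."""
    coords = np.argwhere(sample_img > 0) * pixel_size  # foreground pixels
    if coords.size == 0:
        return False                                   # empty sample: no hit
    return bool(np.any(np.linalg.norm(coords - np.asarray(x), axis=1) <= r))
```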
In this paper we consider a multivariate version of the estimator in [8, 10], and we study its rate of convergence to the theoretical mean density in the fashion of large deviations theory. We recall that this theory provides asymptotic estimates of small probabilities on an exponential scale (see e.g. [17] as a reference on this topic); among the references in the literature, we mention that some large deviation results for the empirical volume fraction of stationary Poisson grain models can be found in [19].
We consider multivariate estimators, i.e. estimators of the density at $m$ points $x_1, \dots, x_m$, and this allows a richer asymptotic analysis for the estimation of the density. In fact, finite sets of points can be used to estimate the entire density function $\lambda_{\Theta_n}(x)$. To better explain this, one could try to adapt some large deviation techniques (see for instance the Dawson–Gärtner Theorem, i.e. Theorem 4.6.1 in [17]) which allow one to lift a collection of large deviation principles in "small" spaces into a large deviation principle on a "large" space.
We remark that in [10] the authors proved the weak consistency of the "Minkowski content"-based estimator in the univariate case. In this paper we prove the strong consistency and the asymptotic Normality of the multivariate estimator. These two issues will be a consequence of some standard arguments in large deviations: see Remark 2 for the strong convergence of sequences of random variables which satisfy the large deviation principle, and Remark 11, which illustrates how the proof of a moderate deviation result can be adapted to prove the weak convergence to a centered Normal distribution.
More specifically, we will consider the multivariate "Minkowski content"-based estimator $\widehat{\boldsymbol\lambda}^{\mu,N} := (\widehat\lambda^{\mu,N}_{\Theta_n}(x_1), \dots, \widehat\lambda^{\mu,N}_{\Theta_n}(x_m))$ of the mean density $\boldsymbol\lambda_{\Theta_n} := (\lambda_{\Theta_n}(x_1), \dots, \lambda_{\Theta_n}(x_m))$ at $m \ge 1$ distinct points $x_1, \dots, x_m$ of $\mathbb{R}^d$. We remark that, given an i.i.d. random sample $\Theta_n^{(1)}, \dots, \Theta_n^{(N)}$ for $\Theta_n$, the estimator $\widehat\lambda^{\mu,N}_{\Theta_n}(x)$ in [8, 10] is of the type

$$\widehat\lambda^{\mu,N}_{\Theta_n}(x) = \frac{1}{w_N} \sum_{i=1}^N \mathbf{1}_{\Theta_n^{(i)} \cap B_{r_N}(x) \neq \emptyset},$$

where $B_{r_N}(x)$ is the ball centered at $x$ with radius $r_N$, and $w_N$ is a suitable normalization which depends on the bandwidth $r_N$. Thus, as far as the multivariate estimator is concerned, in general two distinct random variables $\mathbf{1}_{\Theta_n^{(i)} \cap B_{r_N}(x_j) \neq \emptyset}$ and $\mathbf{1}_{\Theta_n^{(i)} \cap B_{r_N}(x_k) \neq \emptyset}$ (with $j \neq k$) are not independent, and therefore we have an $m$-dimensional estimator with (possibly) dependent components. We point out that, even in the simpler case $m = 1$, results on the strong consistency and on the convergence in law of the "Minkowski content"-based estimator are still not available in the literature. Here we directly deal with the general multivariate vector of dimension $m$ in order to also derive confidence regions for the whole vector (see Section 4.2); of course the univariate case $m = 1$ will be seen as a particular case.
In Section 3 we present the asymptotic results for $S_N/w_N$, where $S_N := \sum_{i=1}^N Y_{i,N}$ and $Y_N$ is a random vector with (possibly) dependent Bernoulli distributed components. Then, for some arbitrary deterministic quantity $w_N$ such that $w_N \to \infty$ as $N \to \infty$, we present a condition (see Eq. (10)) on the bivariate joint distributions of pairs of components of $Y_N$ which allows us to prove large and moderate deviations for $S_N/w_N$ (as $N \to \infty$). As a byproduct we get an asymptotic Normality result (Theorem 10).
In Section 4 the results are applied to the multivariate estimator $\widehat{\boldsymbol\lambda}^{\mu,N}$ under quite general sufficient conditions on $\Theta_n$ (obviously, in these applications $w_N$ is chosen in terms of the bandwidth $r_N$, as said above). In particular we shall see that the sequence $\{\widehat{\boldsymbol\lambda}^{\mu,N} : N \ge 1\}$ satisfies a large deviation principle, from which it is possible to gain information on its convergence rate and then to deduce a strong consistency result for $\widehat{\boldsymbol\lambda}^{\mu,N}$ (see Corollary 15). In Section 4.2 we also find a confidence region for $\boldsymbol\lambda_{\Theta_n}$; this will be done by considering the asymptotic distribution of $\widehat{\boldsymbol\lambda}^{\mu,N}$ when an optimal bandwidth $r_N$ is chosen. This distributional result will play a pivotal role in obtaining confidence regions for $\boldsymbol\lambda_{\Theta_n}$.

Preliminaries
In this section we recall some preliminaries on large deviations and on stochastic geometry, useful for the sequel. We shall refer to the literature and to previous works for a more exhaustive treatment.

Large deviations
We start with some basic definitions, and we refer to [17] for additional details. Let $\mathcal{Z}$ be a topological space. Then a sequence of $\mathcal{Z}$-valued random variables $\{Z_N : N \ge 1\}$ satisfies the large deviation principle (LDP for short) with speed $v_N \to \infty$ and rate function $I : \mathcal{Z} \to [0, \infty]$ if $I$ is lower semicontinuous,

$$\liminf_{N\to\infty} \frac{1}{v_N} \log P(Z_N \in O) \ge -\inf_{z \in O} I(z) \qquad \text{for all open sets } O,$$

and

$$\limsup_{N\to\infty} \frac{1}{v_N} \log P(Z_N \in C) \le -\inf_{z \in C} I(z) \qquad \text{for all closed sets } C.$$

A rate function $I$ is said to be good if all its level sets $\{z \in \mathcal{Z} : I(z) \le c\}$ are compact. The main large deviation tool used in the proofs of this paper is the Gärtner–Ellis Theorem (see e.g. Theorem 2.3.6 in [17]). In this theorem we have $\mathcal{Z} = \mathbb{R}^m$ for some $m \ge 1$; thus, from now on, we set $a \bullet b := \sum_{j=1}^m a_j b_j$ for two generic vectors $a = (a_1, \dots, a_m)$ and $b = (b_1, \dots, b_m)$ of $\mathbb{R}^m$. Before recalling its statement, we remind the reader that a convex function $f : \mathbb{R}^m \to (-\infty, \infty]$ is said to be essentially smooth (see e.g. Definition 2.3.5 in [17]) if the interior of $\mathcal{D}_f := \{\gamma \in \mathbb{R}^m : f(\gamma) < \infty\}$ is non-empty, $f$ is differentiable on the interior of $\mathcal{D}_f$, and $f$ is steep, i.e. $|\nabla f(\gamma_h)| \to \infty$ whenever $\{\gamma_h\}$ is a sequence in the interior of $\mathcal{D}_f$ converging to a boundary point of $\mathcal{D}_f$.

Theorem 1 (Gärtner–Ellis Theorem). Let $\{Z_N : N \ge 1\}$ be a sequence of $\mathbb{R}^m$-valued random variables such that there exists the function $\Lambda : \mathbb{R}^m \to [-\infty, \infty]$ defined by

$$\Lambda(\gamma) := \lim_{N\to\infty} \frac{1}{v_N} \log \mathbb{E}\bigl[e^{v_N\, \gamma \bullet Z_N}\bigr].$$

Assume that the origin $0 = (0, \dots, 0) \in \mathbb{R}^m$ belongs to the interior of $\mathcal{D}_\Lambda := \{\gamma \in \mathbb{R}^m : \Lambda(\gamma) < \infty\}$. If $\Lambda$ is essentially smooth and lower semicontinuous, then $\{Z_N : N \ge 1\}$ satisfies the LDP with speed $v_N$ and good rate function $\Lambda^*$ defined by $\Lambda^*(y) := \sup_{\gamma \in \mathbb{R}^m} \{\gamma \bullet y - \Lambda(\gamma)\}$.

In our applications we always have $\mathcal{D}_\Lambda = \mathbb{R}^m$, and therefore the steepness condition always holds vacuously. In the following remark we briefly discuss the convergence of $\{Z_N : N \ge 1\}$ in Theorem 1.
Remark 2. One can check that, when we can apply the Gärtner–Ellis Theorem, the rate function $\Lambda^*(y)$ uniquely vanishes at $y = y_0$, where $y_0 := \nabla\Lambda(0)$. Then, if we consider the notation $B_\delta^c(y_0) := \{y \in \mathbb{R}^m : |y - y_0| \ge \delta\}$ for $\delta > 0$, we have $\Lambda^*(B_\delta^c(y_0)) := \inf_{y \in B_\delta^c(y_0)} \Lambda^*(y) > 0$ and, for all $\eta$ such that $0 < \eta < \Lambda^*(B_\delta^c(y_0))$, there exists $N_0$ such that

$$P\bigl(Z_N \in B_\delta^c(y_0)\bigr) \le \exp\bigl(-v_N(\Lambda^*(B_\delta^c(y_0)) - \eta)\bigr) \qquad \text{for all } N \ge N_0$$

(this is a consequence of the large deviation upper bound for the closed set $C = B_\delta^c(y_0)$). Thus we can say that $Z_N$ converges in probability to $y_0$ as $N \to \infty$; moreover, the convergence is almost sure if $\sum_{N \ge 1} \exp(-v_N(\Lambda^*(B_\delta^c(y_0)) - \eta)) < \infty$, by a standard application of the Borel–Cantelli lemma.
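As a concrete instance of the Borel–Cantelli criterion above (a worked observation of ours, not from the source): if the speed grows super-logarithmically, i.e. $v_N/\log N \to \infty$, then for $c := \Lambda^*(B_\delta^c(y_0)) - \eta > 0$ we eventually have $v_N \ge \frac{2}{c}\log N$, whence

$$\sum_{N \ge 1} \exp\bigl(-v_N(\Lambda^*(B_\delta^c(y_0)) - \eta)\bigr) < \infty, \qquad \text{since } e^{-v_N c} \le N^{-2} \text{ for } N \text{ large},$$

and the convergence $Z_N \to y_0$ is almost sure. This is the mechanism behind the strong consistency results of Section 4, where $v_N = w_N = N b_{d-n} r_N^{d-n}$; in that setting the series converges provided $w_N$ grows faster than $\log N$ (an additional condition on the bandwidth $r_N$).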

Random closed sets
To lighten the presentation we shall use notation similar to that of previous works [8, 25, 26]; in particular, for the reader's convenience, we refer to [26, Section 2] (and references therein) for the mathematical background and for more details on the Minkowski content notion and on marked point process theory.
We remind here that, given a probability space $(\Omega, \mathcal{F}, P)$, a random closed set $\Theta$ in $\mathbb{R}^d$ is a measurable map $\Theta : (\Omega, \mathcal{F}, P) \to (\mathbb{F}, \sigma_{\mathbb{F}})$, where $\mathbb{F}$ denotes the class of the closed subsets of $\mathbb{R}^d$, and $\sigma_{\mathbb{F}}$ is the $\sigma$-algebra generated by the so-called Fell topology, or hit-or-miss topology, that is, the topology generated by the set system $\{\mathbb{F}_G : G \in \mathcal{G}\} \cup \{\mathbb{F}^C : C \in \mathcal{C}\}$, where $\mathcal{G}$ and $\mathcal{C}$ are the systems of the open and compact subsets of $\mathbb{R}^d$, respectively, while $\mathbb{F}_G := \{F \in \mathbb{F} : F \cap G \neq \emptyset\}$ and $\mathbb{F}^C := \{F \in \mathbb{F} : F \cap C = \emptyset\}$ (e.g., see [22]).
By means of marked point processes, every random closed set $\Theta$ in $\mathbb{R}^d$ can be represented as a germ-grain model as follows:

$$\Theta = \bigcup_{(\xi_i, S_i) \in \Psi} \bigl(\xi_i + Z(S_i)\bigr), \qquad (1)$$

where $\Psi = \{(\xi_i, S_i)\}_{i \in \mathbb{N}}$ is a marked point process in $\mathbb{R}^d$ with marks in a suitable mark space $\mathbf{K}$, so that $Z_i = Z(S_i)$, $i \in \mathbb{N}$, is a random set containing the origin (i.e., $Z : \mathbf{K} \to \mathbb{F}$).
To set the notation, we denote by $\Theta_n$ any random closed set in $\mathbb{R}^d$ with Hausdorff dimension $n$, by $\operatorname{disc} f$ the set of the discontinuity points of any function $f$, and by $b_n$ the volume of the unit ball in $\mathbb{R}^n$. We also recall that the parallel set (or, equivalently, the Minkowski enlargement) of $A \subset \mathbb{R}^d$ at distance $r > 0$ is the set defined as

$$A_{\oplus r} := \{x \in \mathbb{R}^d : \operatorname{dist}(x, A) \le r\}$$

(e.g., see [3] and references therein for a more exhaustive treatment).
As mentioned in the Introduction, whenever the measure $\mathbb{E}[\mathcal{H}^n(\Theta_n \cap \cdot\,)]$ on $\mathbb{R}^d$ is absolutely continuous with respect to $\mathcal{H}^d$, we denote by $\lambda_{\Theta_n}$ its density, and we call it the mean density of $\Theta_n$. It has been proved [26, Proposition 5] that any random closed set $\Theta_n$ in $\mathbb{R}^d$ with Hausdorff dimension $n < d$ as in (1) has mean density $\lambda_{\Theta_n}$ given in terms of the intensity measure $\Lambda(\mathrm{d}(y,s)) = f(y,s)\,\mathrm{d}y\,Q(\mathrm{d}s)$ of its associated marked point process $\Psi$ as follows:

$$\lambda_{\Theta_n}(x) = \int_{\mathbf{K}} \int_{x - Z(s)} f(y, s)\, \mathcal{H}^n(\mathrm{d}y)\, Q(\mathrm{d}s) \qquad \text{for } \mathcal{H}^d\text{-a.e. } x \in \mathbb{R}^d, \qquad (2)$$

where $x - Z(s) = x + (-Z(s))$ and $-Z(s)$ is the reflection of $Z(s)$ at the origin.
In the sequel we will assume that an i.i.d. random sample $\Theta_n^{(1)}, \dots, \Theta_n^{(N)}$ is available for the random closed set $\Theta_n$. An approximation of the mean density based on the $\mathcal{H}^d$-measure of the Minkowski enlargement of the random set in question has been provided in [2, 26], under quite general regularity conditions on the grains of $\Theta_n$ and on the functions $f$ and $g$ introduced above:

Theorem 3 ([26, Theorem 7]). Let $\Theta_n$ be as in (1) and such that the following assumptions are satisfied:

(A1) for any $s \in \mathbf{K}$, $Z(s)$ is a countably $\mathcal{H}^n$-rectifiable compact set, and there exists a closed set $\Xi(s) \supseteq Z(s)$ such that $\int_{\mathbf{K}} \mathcal{H}^n(\Xi(s))\, Q(\mathrm{d}s) < \infty$ and

$$\mathcal{H}^n\bigl(\Xi(s) \cap B_r(y)\bigr) \ge \gamma r^n \qquad \text{for all } y \in \Xi(s) \text{ and } r \in (0,1),$$

for some $\gamma > 0$ independent of $y$ and $s$;

(A2) a local domination condition, involving a function $\xi_K(s, y)$ with suitable integrability properties (we refer to [26] for the precise formulation);

(A3) a further domination condition, involving a function $\xi_{a,K}(s, y, t)$ with suitable integrability properties (we refer to [26] for the precise formulation).

Then

$$\lim_{r \downarrow 0} \frac{P(x \in \Theta_{n \oplus r})}{b_{d-n} r^{d-n}} = \lambda_{\Theta_n}(x) \qquad \text{for } \mathcal{H}^d\text{-a.e. } x \in \mathbb{R}^d. \qquad (3)$$

Remark 4. The measure $\mathcal{H}^n(\Xi(s) \cap \cdot\,)$ in (A1) plays the same role as the measure $\nu$ of Theorem 2.104 in [3]; indeed (A1) might be seen as its stochastic version, and it is often fulfilled with $\Xi(s) = \partial Z(s)$ or $\Xi(s) = \partial Z(s) \cup A$ for some sufficiently regular random closed set $A$ (see also [25, Remark 3.6] and [26, Example 1]). Roughly speaking, this assumption tells us that each possible grain associated to any point of the underlying point process $\{(\xi_i, S_i)\}_{i \in \mathbb{N}}$ is sufficiently regular, so that it admits $n$-dimensional Minkowski content; this also explains why requiring the existence of a constant $\gamma$ as in (A1) independent of $y$ and $s$ is not too restrictive. Note that the condition $\int_{\mathbf{K}} \mathcal{H}^n(\Xi(s))\, Q(\mathrm{d}s) < \infty$ means that the $\mathcal{H}^n$-measure of the grains is finite in mean. The role of Assumptions (A2) and (A3) is more technical: they guarantee that the dominated convergence theorem can be applied in the proof of the theorem. We may also notice that if $Z(s)$ has a bounded diameter (i.e., $\operatorname{diam}(Z(s)) \le C \in \mathbb{R}$ for $Q$-a.e. $s \in \mathbf{K}$), or if $f$ and $g$ are bounded, then Assumptions (A2) and (A3) simplify (see also Remark 9 in [26]).
The above assumptions then imply Eq. (3), which is obtained through a stochastic local version of the $n$-dimensional Minkowski content of $\Theta_n$. We refer the interested reader to [26] for a more detailed discussion of this point.
As a byproduct, given an i.i.d. random sample $\{\Theta_n^{(i)}\}_{i \in \mathbb{N}}$ of $\Theta_n$, the following "Minkowski content"-based estimator of $\lambda_{\Theta_n}(x)$ has been proposed:

$$\widehat\lambda^{\mu,N}_{\Theta_n}(x) := \frac{\sum_{i=1}^N \mathbf{1}_{x \in \Theta^{(i)}_{n \oplus r_N}}}{N b_{d-n} r_N^{d-n}}, \qquad (4)$$

where $r_N$ is the bandwidth, which depends on the sample size $N$. It is easily checked that $\widehat\lambda^{\mu,N}_{\Theta_n}(x)$ is asymptotically unbiased and weakly consistent for $\lambda_{\Theta_n}(x)$, provided that

$$\lim_{N\to\infty} r_N = 0 \qquad \text{and} \qquad \lim_{N\to\infty} N r_N^{d-n} = \infty. \qquad (5)$$
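To illustrate Eq. (4), the following sketch of ours (not code from the paper) evaluates the estimator when each sample $\Theta_n^{(i)}$ is represented by a finite point cloud; this representation and all names are illustrative assumptions.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def unit_ball_volume(k):
    """Volume b_k of the unit ball in R^k."""
    return np.pi ** (k / 2.0) / gamma_fn(k / 2.0 + 1.0)

def minkowski_estimator(samples, x, r_N, d, n):
    """Evaluate the "Minkowski content"-based estimator of Eq. (4) at x:
    (number of samples hitting B_{r_N}(x)) / (N * b_{d-n} * r_N^{d-n}).
    `samples` is a list of (M_i, d) arrays, each discretizing one i.i.d.
    sample Theta_n^(i)  (an illustrative representation of ours)."""
    x = np.asarray(x, dtype=float)
    hits = sum(
        1 for pts in samples
        if np.min(np.linalg.norm(np.asarray(pts) - x, axis=1)) <= r_N
    )
    return hits / (len(samples) * unit_ball_volume(d - n) * r_N ** (d - n))
```

A discretized sample hits $B_{r_N}(x)$ exactly when its minimal distance from $x$ is at most $r_N$, which realizes the indicator $\mathbf{1}_{x \in \Theta^{(i)}_{n \oplus r_N}}$ of Eq. (4).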

General results for Bernoulli variables
We consider a triangular array $\{Y_{i,N} : 1 \le i \le N\}$ of random variables defined on some probability space $(\Omega, \mathcal{F}, P)$ and taking values in $(\mathbb{R}^m, \mathcal{B}(\mathbb{R}^m))$, with $m \ge 1$. We assume that, for each $N$, the random vectors $Y_{1,N}, \dots, Y_{N,N}$ are independent and distributed as a random vector

$$Y_N = (Y_N^{(1)}, \dots, Y_N^{(m)}), \qquad (6)$$

where the components of $Y_N$ are (possibly) dependent Bernoulli distributed random variables; more precisely, we mean

$$P(Y_N^{(j)} = 1) = p_N^{(j)} = 1 - P(Y_N^{(j)} = 0) \qquad \text{for all } j \in \{1, \dots, m\}.$$

The aim is to prove asymptotic results for the sequence

$$\Bigl\{\frac{1}{w_N}\sum_{i=1}^N Y_{i,N} : N \ge 1\Bigr\}, \qquad (7)$$

where $\{w_N : N \ge 1\}$ is a sequence of positive numbers such that:

$$\lim_{N\to\infty} w_N = \infty \qquad \text{and} \qquad \lim_{N\to\infty} \frac{w_N}{N} = 0; \qquad (8)$$

$$\lim_{N\to\infty} \frac{N}{w_N}\, p_N^{(j)} = \lambda_j \quad \text{for some } \lambda_j \in (0, \infty), \text{ for all } j \in \{1, \dots, m\}; \qquad (9)$$

$$\lim_{N\to\infty} \frac{N}{w_N}\, P\bigl(Y_N^{(j)} = 1, Y_N^{(k)} = 1\bigr) = 0 \qquad \text{for all } j, k \in \{1, \dots, m\} \text{ with } j \neq k. \qquad (10)$$

Remark 5. (i) If $m = 1$, Eq. (10) can be neglected and some parts of the proofs presented below are simplified.
(ii) If $Y_N$ has independent components, for $j, k \in \{1, \dots, m\}$ with $j \neq k$ we have

$$\frac{N}{w_N}\, P\bigl(Y_N^{(j)} = 1, Y_N^{(k)} = 1\bigr) = \frac{N}{w_N}\, p_N^{(j)} p_N^{(k)},$$

and thus condition (10) is clearly satisfied, by taking into account condition (9) and that $\lim_{N\to\infty} p_N^{(k)} = 0$ (this limit is a consequence of condition (9) and of the second limit in (8)).

In view of the following results we introduce some further notation: $S_N := \sum_{i=1}^N Y_{i,N}$, and therefore $\{S_N/w_N : N \ge 1\}$ coincides with the sequence in (7); moreover, for $\lambda \in (0, \infty)$, let

$$I(y; \lambda) := y \log\frac{y}{\lambda} - y + \lambda \quad \text{for } y \ge 0 \quad (\text{with the convention } 0\log 0 := 0), \qquad I(y; \lambda) := +\infty \text{ for } y < 0. \qquad (11)$$

We start with the large deviation principle for the sequence in (7).

Theorem 6. Let $\{Y_{i,N} : 1 \le i \le N\}$ be a triangular array of random vectors as above (thus (6), (8), (9) and (10) hold). Then $\{S_N/w_N : N \ge 1\}$ satisfies the LDP with speed function $v_N = w_N$ and good rate function $I_m(\cdot\,; \lambda)$ defined by $I_m(y; \lambda) := \sum_{j=1}^m I(y_j; \lambda_j)$, where $I(y_j; \lambda_j)$ is as in (11).

Proof. We want to apply the Gärtner–Ellis Theorem, and we have to prove that

$$\lim_{N\to\infty} \frac{1}{w_N} \log \mathbb{E}\bigl[e^{\gamma \bullet S_N}\bigr] = \sum_{j=1}^m \lambda_j (e^{\gamma_j} - 1) \qquad \text{for all } \gamma \in \mathbb{R}^m. \qquad (12)$$

Note that $\gamma \mapsto \sum_{j=1}^m \lambda_j(e^{\gamma_j} - 1)$ is finite and differentiable on the whole of $\mathbb{R}^m$, so the hypotheses of Theorem 1 are satisfied. In fact (12) yields the desired LDP with good rate function $J_m$ defined by

$$J_m(y) := \sup_{\gamma \in \mathbb{R}^m} \Bigl\{\gamma \bullet y - \sum_{j=1}^m \lambda_j (e^{\gamma_j} - 1)\Bigr\},$$

and, as we see, this function coincides with $I_m(\cdot\,; \lambda)$ in the statement of the theorem: for $m = 1$ one can easily check that $J_1(y_1) = I(y_1; \lambda_1)$; for $m \ge 2$ the supremum factorizes over the coordinates, so that

$$J_m(y) = \sum_{j=1}^m \sup_{\gamma_j \in \mathbb{R}} \bigl\{\gamma_j y_j - \lambda_j (e^{\gamma_j} - 1)\bigr\} = \sum_{j=1}^m I(y_j; \lambda_j)$$

(for the coordinates with $y_j < 0$ one lets $\gamma_j \to -\infty$, obtaining an infinite supremum).
In the remaining part of the proof we show the validity of (12). By (6) we have

$$\mathbb{E}\bigl[e^{\gamma \bullet S_N}\bigr] = \bigl(M_{Y_N}(\gamma)\bigr)^N, \qquad \text{and hence} \qquad \frac{1}{w_N}\log \mathbb{E}\bigl[e^{\gamma \bullet S_N}\bigr] = \frac{N}{w_N}\log M_{Y_N}(\gamma),$$

where $M_{Y_N}$ is the moment generating function of $Y_N$. Moreover, if we set $A_{j,N} := \{Y_N^{(j)} = 1\}$ (for $N \ge 1$ and $j \in \{1, \dots, m\}$), we have

$$M_{Y_N}(\gamma) = \mathbb{E}\Bigl[\prod_{j=1}^m \bigl(1 + (e^{\gamma_j} - 1)\mathbf{1}_{A_{j,N}}\bigr)\Bigr] = 1 + \sum_{j=1}^m (e^{\gamma_j} - 1)\, P(A_{j,N}) + \sum_{\ell=2}^m\ \sum_{\substack{J \subseteq \{1, \dots, m\} \\ |J| = \ell}}\ \prod_{j \in J} (e^{\gamma_j} - 1)\, P\Bigl(\bigcap_{j \in J} A_{j,N}\Bigr). \qquad (14)$$

We remark that $\lim_{N\to\infty} \frac{N}{w_N} P(A_{j,N}) = \lambda_j$ by (9) and that, for all $\ell \ge 2$ and all $J$ with $|J| = \ell$, we have $P(\bigcap_{j \in J} A_{j,N}) \le P(A_{j_1,N} \cap A_{j_2,N})$ for any two distinct $j_1, j_2 \in J$ (see also the inclusion-exclusion formula in [7, p. 24]); therefore

$$\lim_{N\to\infty} \frac{N}{w_N}\, P\Bigl(\bigcap_{j \in J} A_{j,N}\Bigr) = 0$$

by (10). In particular $M_{Y_N}(\gamma) = 1 + o(1)$, so that, by a first-order Taylor expansion of the logarithm and by substituting the expression of $M_{Y_N}$ in (14), we obtain

$$\frac{N}{w_N} \log M_{Y_N}(\gamma) = \frac{N}{w_N} \Bigl(\sum_{j=1}^m (e^{\gamma_j} - 1)\, P(A_{j,N}) + o\Bigl(\frac{w_N}{N}\Bigr)\Bigr) \longrightarrow \sum_{j=1}^m \lambda_j (e^{\gamma_j} - 1) \qquad \text{as } N \to \infty,$$

which is precisely (12).
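For completeness, here is the one-dimensional Legendre transform computation behind the identification of the rate function (a standard calculation, spelled out by us). For $y > 0$, the supremum in

$$I(y; \lambda) = \sup_{\gamma \in \mathbb{R}} \{\gamma y - \lambda(e^{\gamma} - 1)\}$$

is attained where $y = \lambda e^{\gamma}$, i.e. at $\gamma = \log(y/\lambda)$, which gives

$$I(y; \lambda) = y \log\frac{y}{\lambda} - y + \lambda;$$

for $y = 0$ the supremum equals $\lambda$ (letting $\gamma \to -\infty$), and for $y < 0$ it equals $+\infty$ (since then $\gamma y \to +\infty$ as $\gamma \to -\infty$), in agreement with (11).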

Remark 7. Assume that $m = 1$ for simplicity, and let us denote the Poisson distribution with mean $\lambda$ by $\mathcal{P}(\lambda)$. It is well known that, if $\lim_{N\to\infty} N p_N^{(1)} = \lambda_1 \in (0, \infty)$, then $S_N^{(1)} := \sum_{i=1}^N Y_{i,N}^{(1)}$ converges weakly to $\mathcal{P}(\lambda_1)$ (as $N \to \infty$). Then Theorem 6 (for $m = 1$) describes what happens if we divide both $N p_N^{(1)}$ (in the limit above) and the sums $\{S_N^{(1)} : N \ge 1\}$ by $w_N$: in fact $S_N^{(1)}/w_N$ converges to $\lambda_1$ (as $N \to \infty$) in probability (see Remark 2) and, for the rate function $I_1(\cdot\,; \lambda_1)$ in Theorem 6, we can say that $I(y_1; \lambda_1)$ is the relative entropy of $\mathcal{P}(y_1)$ with respect to $\mathcal{P}(\lambda_1)$ (when $y_1 \ge 0$).
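The relative-entropy identification in Remark 7 can be verified directly (our computation): for $y_1 > 0$,

$$H\bigl(\mathcal{P}(y_1)\,\big|\,\mathcal{P}(\lambda_1)\bigr) = \sum_{k \ge 0} e^{-y_1}\frac{y_1^k}{k!}\,\log\frac{e^{-y_1} y_1^k / k!}{e^{-\lambda_1} \lambda_1^k / k!} = (\lambda_1 - y_1) + \Bigl(\sum_{k \ge 0} k\, e^{-y_1}\frac{y_1^k}{k!}\Bigr)\log\frac{y_1}{\lambda_1} = \lambda_1 - y_1 + y_1\log\frac{y_1}{\lambda_1},$$

which is exactly $I(y_1; \lambda_1)$ in (11).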
Now we prove the moderate deviation result for the sequence in (7); a brief explanation of this terminology is given in Remark 11.

Theorem 8. Consider the same assumptions of Theorem 6, and set $p_N := (p_N^{(1)}, \dots, p_N^{(m)})$. Then, for any sequence of positive numbers $\{a_N : N \ge 1\}$ such that $\lim_{N\to\infty} a_N = 0$ and $\lim_{N\to\infty} a_N w_N = \infty$, the sequence

$$\Bigl\{\sqrt{\frac{a_N}{w_N}}\,(S_N - N p_N) : N \ge 1\Bigr\}$$

satisfies the LDP with speed function $v_N = 1/a_N$ and good rate function $\widetilde I_m(\cdot\,; \lambda)$ defined by $\widetilde I_m(y; \lambda) := \sum_{j=1}^m \frac{y_j^2}{2\lambda_j}$.

Proof. We want to apply the Gärtner–Ellis Theorem, and we have to prove that

$$\lim_{N\to\infty} a_N \log \mathbb{E}\Bigl[\exp\Bigl(\frac{\gamma \bullet (S_N - N p_N)}{\sqrt{w_N a_N}}\Bigr)\Bigr] = \sum_{j=1}^m \frac{\lambda_j \gamma_j^2}{2} \qquad \text{for all } \gamma \in \mathbb{R}^m. \qquad (19)$$

In fact (19) yields the desired LDP with good rate function $\widetilde J_m$ defined by

$$\widetilde J_m(y) := \sup_{\gamma \in \mathbb{R}^m} \Bigl\{\gamma \bullet y - \sum_{j=1}^m \frac{\lambda_j \gamma_j^2}{2}\Bigr\},$$

and, as we see, this function coincides with $\widetilde I_m(\cdot\,; \lambda)$ in the statement of the theorem: for $m = 1$ we can easily check that $\widetilde J_1(y_1) = \frac{y_1^2}{2\lambda_1}$; for $m \ge 2$ the supremum factorizes over the coordinates, and each one-dimensional supremum is attained at $\gamma_j = y_j/\lambda_j$.
In the remaining part of the proof we show that (19) holds. In what follows we use some notation already introduced in the proof of Theorem 6: $M_{Y_N}$ is the moment generating function of $Y_N$ and $A_{j,N} := \{Y_N^{(j)} = 1\}$ (for $N \ge 1$ and $j \in \{1, \dots, m\}$). By (6), and after some computations, we get

$$a_N \log \mathbb{E}\Bigl[\exp\Bigl(\frac{\gamma \bullet (S_N - N p_N)}{\sqrt{w_N a_N}}\Bigr)\Bigr] = a_N N \log M_{Y_N}\Bigl(\frac{\gamma}{\sqrt{w_N a_N}}\Bigr) - a_N N\, \frac{\gamma \bullet p_N}{\sqrt{w_N a_N}}, \qquad (20)$$

where $p_N = (p_N^{(1)}, \dots, p_N^{(m)})$. As in (14) we can write

$$M_{Y_N}\Bigl(\frac{\gamma}{\sqrt{w_N a_N}}\Bigr) = 1 + \sum_{j=1}^m \bigl(e^{\gamma_j/\sqrt{w_N a_N}} - 1\bigr)\, P(A_{j,N}) + \text{(terms involving intersections of the events } A_{j,N}\text{)}.$$

By substituting $e^{\gamma_j/\sqrt{w_N a_N}} - 1$ with its asymptotic expansion $\frac{\gamma_j}{\sqrt{w_N a_N}} + \frac{\gamma_j^2}{2 w_N a_N} + o\bigl(\frac{1}{w_N a_N}\bigr)$, we may claim that

$$M_{Y_N}\Bigl(\frac{\gamma}{\sqrt{w_N a_N}}\Bigr) = 1 + \sum_{j=1}^m \Bigl(\frac{\gamma_j}{\sqrt{w_N a_N}} + \frac{\gamma_j^2}{2 w_N a_N}\Bigr) p_N^{(j)} + R_N, \qquad (21)$$

where the remainder $R_N$ collects the discarded terms. Now, returning to the expression in (20), using a first-order Taylor expansion of $\log(1 + u)$ and replacing the expression given in (21), the terms of order $\frac{\gamma_j p_N^{(j)}}{\sqrt{w_N a_N}}$ cancel with $-a_N N \frac{\gamma \bullet p_N}{\sqrt{w_N a_N}}$, while

$$a_N N \sum_{j=1}^m \frac{\gamma_j^2}{2 w_N a_N}\, p_N^{(j)} = \sum_{j=1}^m \frac{\gamma_j^2}{2} \cdot \frac{N p_N^{(j)}}{w_N} \longrightarrow \sum_{j=1}^m \frac{\lambda_j \gamma_j^2}{2} \qquad \text{as } N \to \infty,$$

by (9). Moreover, by (10), the terms involving the intersections of the events $A_{j_1,N}, A_{j_2,N}$ (for $j_1 \neq j_2$) are $o\bigl(\frac{1}{a_N N}\bigr)$; therefore $R_N = o\bigl(\frac{1}{a_N N}\bigr)$, so that $a_N N R_N \to 0$. In conclusion we obtain (19).

Remark 9. The hypothesis $\lim_{N\to\infty} a_N w_N = \infty$ is needed for the LDP stated in Theorem 8, i.e. for the moderate deviation regime illustrated in Remark 11 below. On the other hand, this condition is not needed to prove the limit (19); in fact, as happens in the proof of Theorem 10 below, the limit (19) holds even if $a_N = 1$ for all $N \ge 1$.
We conclude with an asymptotic Normality result. In view of this, we use the symbol $\mathcal{N}_m(0, \Sigma)$ for the centered $m$-variate Normal distribution with covariance matrix $\Sigma$, where $\Sigma$ is the diagonal matrix with entries $\lambda_1, \dots, \lambda_m$, i.e.

$$\Sigma := \operatorname{diag}(\lambda_1, \dots, \lambda_m). \qquad (22)$$
Theorem 10. Consider the same assumptions of Theorem 6. Then

$$\frac{S_N - N p_N}{\sqrt{w_N}} \longrightarrow \mathcal{N}_m(0, \Sigma) \qquad \text{weakly, as } N \to \infty.$$

Proof. The proof is a consequence of the following limit:

$$\lim_{N\to\infty} \log \mathbb{E}\Bigl[\exp\Bigl(\frac{\gamma \bullet (S_N - N p_N)}{\sqrt{w_N}}\Bigr)\Bigr] = \sum_{j=1}^m \frac{\lambda_j \gamma_j^2}{2} \qquad \text{for all } \gamma \in \mathbb{R}^m,$$

i.e. (19) with $a_N = 1$ for all $N \ge 1$. This limit holds because all the computations (to get (19) in the proof of Theorem 8) still work even if $a_N = 1$ for all $N \ge 1$.
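A quick simulation sketch (ours, with independent components and the concrete choice $p_N = \lambda\, w_N/N$, so that (8)–(10) hold) illustrates Theorem 10: the standardized sums have approximately the covariance $\Sigma = \operatorname{diag}(\lambda_1, \dots, \lambda_m)$.

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 2, 50_000
w_N = N ** 0.4                      # w_N -> infinity while w_N / N -> 0, cf. (8)
lam = np.array([1.5, 0.7])
p_N = lam * w_N / N                 # then N * p_N / w_N = lam, i.e. (9) holds

# With independent Bernoulli components, S_N^{(j)} ~ Binomial(N, p_N^{(j)}).
reps = 2000
S_N = rng.binomial(N, p_N, size=(reps, m))
Z = (S_N - N * p_N) / np.sqrt(w_N)  # the standardized sequence of Theorem 10

print(np.cov(Z.T))                  # approximately diag(1.5, 0.7)
```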

Remark 11. Typically, moderate deviations fill the gap between a convergence to zero (or to the null vector) of centered random variables and a weak convergence to a (possibly multivariate) centered Normal distribution. This is what happens in Theorem 8 for $\bigl\{\sqrt{a_N/w_N}\,(S_N - N p_N) : N \ge 1\bigr\}$: we mean the convergence to the null vector of $\mathbb{R}^m$ when $a_N = 1/w_N$ for all $N \ge 1$ (thus $a_N \to 0$ holds, but $a_N w_N \to \infty$ fails), which is yielded by Theorem 6, and the weak convergence to $\mathcal{N}_m(0, \Sigma)$ when $a_N = 1$ for all $N \ge 1$ (thus $a_N w_N \to \infty$ holds, but $a_N \to 0$ fails) in Theorem 10.

Applications in Stochastic Geometry
In this section we apply the general results shown in the previous one to a multivariate version of the "Minkowski content"-based estimator defined in (4), say $\widehat{\boldsymbol\lambda}^{\mu,N}$. Namely, in Section 4.1 we first provide a sufficient condition which guarantees Eq. (10); then a series of results on $\widehat{\boldsymbol\lambda}^{\mu,N}$ will follow as corollaries of the theorems above, among them the strong consistency of $\widehat{\boldsymbol\lambda}^{\mu,N}$. In Section 4.2 we study the asymptotic distribution of $\widehat{\boldsymbol\lambda}^{\mu,N}$ to get confidence regions for the $m$-dimensional vector $(\lambda_{\Theta_n}(x_1), \dots, \lambda_{\Theta_n}(x_m))$ of mean densities of $\Theta_n$ at $(x_1, \dots, x_m) \in (\mathbb{R}^d)^m$. The importance of choosing a suitable optimal bandwidth $r_N$ satisfying condition (5), the same for every component of the vector $(\lambda_{\Theta_n}(x_1), \dots, \lambda_{\Theta_n}(x_m))$, will emerge.

Statistical properties of the "Minkowski content"-based estimator
Let $\Theta_n$ be a random closed set with integer Hausdorff dimension $n < d$ in $\mathbb{R}^d$, satisfying the assumptions of Theorem 3, and let $\Theta_n^{(1)}, \dots, \Theta_n^{(N)}$ be an i.i.d. random sample for $\Theta_n$. We consider the "Minkowski content"-based estimator $\widehat\lambda^{\mu,N}_{\Theta_n}(x)$ for the mean density of $\Theta_n$ at a point $x \in \mathbb{R}^d$, defined in (4), with bandwidth $r_N$ satisfying condition (5). Given $m$ distinct points $x_1, \dots, x_m \in \mathbb{R}^d$, we can define the multivariate "Minkowski content"-based estimator

$$\widehat{\boldsymbol\lambda}^{\mu,N} := \bigl(\widehat\lambda^{\mu,N}_{\Theta_n}(x_1), \dots, \widehat\lambda^{\mu,N}_{\Theta_n}(x_m)\bigr).$$

Since $\widehat\lambda^{\mu,N}_{\Theta_n}(x_j)$ is asymptotically unbiased and weakly consistent for $\lambda_{\Theta_n}(x_j)$, for each $j \in \{1, \dots, m\}$, we have that $\widehat{\boldsymbol\lambda}^{\mu,N}$ is a good estimator for $\boldsymbol\lambda_{\Theta_n} := (\lambda_{\Theta_n}(x_1), \dots, \lambda_{\Theta_n}(x_m))$.
Moreover, observe that $\widehat{\boldsymbol\lambda}^{\mu,N}$ can be rewritten in the following way:

$$\widehat{\boldsymbol\lambda}^{\mu,N} = \frac{S_N}{w_N}, \qquad (23)$$

where we set

$$Y_{i,N}^{(j)} := \mathbf{1}_{x_j \in \Theta^{(i)}_{n \oplus r_N}}, \qquad S_N := \sum_{i=1}^N Y_{i,N}, \qquad w_N := N b_{d-n} r_N^{d-n}.$$
In particular, $Y_N^{(j)}$ is Bernoulli distributed with parameter $p_N^{(j)} := P(x_j \in \Theta_{n \oplus r_N})$. Observe that condition (8) immediately follows from (5), whereas (9) is fulfilled for a.e. $x_j \in \mathbb{R}^d$ by replacing $x$ with $x_j$ in Eq. (3); indeed

$$\frac{N}{w_N}\, p_N^{(j)} = \frac{P(x_j \in \Theta_{n \oplus r_N})}{b_{d-n} r_N^{d-n}} \longrightarrow \lambda_{\Theta_n}(x_j) =: \lambda_j \qquad \text{as } N \to \infty.$$

Then, in order to apply the results proved in Section 3 with the aim of making inference on $\widehat{\boldsymbol\lambda}^{\mu,N}$, it remains to show that assumption (10) is also satisfied.
The next lemma provides a sufficient condition on the random closed set $\Theta_n$ for the validity of (10). Note that the condition (24) in the statement is satisfied if the points $x_1, \dots, x_m$ and $Q$-almost every realization $Z(s)$ of the grains of $\Theta$ are such that the sets $x_j - Z(s)$ and $x_k - Z(s)$ (for $j \neq k$) intersect in a lower dimensional set; this is the case, for instance, when $Z(s)$ is the boundary of any ball. By taking into account that usually $Q$ is assumed to be continuous in applications, it is quite intuitive that the condition (24) is generally fulfilled for any fixed $m$-tuple of points $(x_1, \dots, x_m) \in (\mathbb{R}^d)^m$.

Lemma 12. Let $\Theta_n$ be a random closed set as above. If condition (24) holds for an $m$-tuple of points $(x_1, \dots, x_m) \in (\mathbb{R}^d)^m$, then (10) is satisfied.
Proof. Define the event $A_j := \{x_j \in \Theta_{n \oplus r}\}$ for every $j = 1, \dots, m$; then we have to prove that

$$\lim_{r \downarrow 0} \frac{P(A_j \cap A_k)}{b_{d-n} r^{d-n}} = 0 \qquad \text{for all } j \neq k. \qquad (25)$$

We introduce the random variables $W_r^{(j)}$ counting the number of enlarged grains which cover the point $x_j$, namely

$$W_r^{(j)} := \sum_{(\xi_i, S_i) \in \Psi} \mathbf{1}_{x_j \in (\xi_i + Z(S_i))_{\oplus r}}.$$

To lighten the notation, and without loss of generality, we will prove (25) for $j = 1$ and $k = 2$. To do this, let us observe that

$$P(A_1 \cap A_2) \le \mathbb{E}\bigl[W_r^{(1)} W_r^{(2)}\bigr]. \qquad (26)$$

Since the marginal process of $\Psi$ is simple (i.e., every point $\xi_i$ has multiplicity one), the expectation on the right-hand side of (26) can be written as the sum $E_1 + E_2$, where $E_1$ collects the diagonal terms (the same enlarged grain covers both $x_1$ and $x_2$) and $E_2$ the off-diagonal ones (two distinct grains are involved). A direct application of Campbell's theorem (e.g., see [4, p. 28]) implies

$$E_1 = \int_{\mathbf{K}} \int_{\mathbb{R}^d} \mathbf{1}_{y \in (x_1 - Z(s))_{\oplus r} \cap (x_2 - Z(s))_{\oplus r}}\, f(y, s)\,\mathrm{d}y\, Q(\mathrm{d}s). \qquad (27)$$

Theorem 3.5 in [25] applies to both $(x_j - Z(s))_{\oplus r}$ and $[(x_1 - Z(s)) \cup (x_2 - Z(s))]_{\oplus r}$; moreover $(x_1 - Z(s)) \cap (x_2 - Z(s))$ is a lower dimensional set for $Q$-a.e. $s \in \mathbf{K}$ by (24), so that Assumptions (A1) and (A2) (see Theorem 3), together with the dominated convergence theorem, allow us to claim that

$$\lim_{r \downarrow 0} \frac{E_1}{b_{d-n}\, r^{d-n}} = 0.$$

We argue similarly for the term $E_2$ in (26). By the definition of the second factorial moment measure (e.g., see [15]), $E_2$ can be written as an integral with respect to the second factorial moment measure of $\Psi$; thanks to Assumptions (A1) and (A3), Theorem 3.5 in [25] applies again, and a dominating function with finite integral on $\mathbf{K}^2 \times \mathbb{R}^d$ is available by (A3) (see (28)), whose existence relies on Remark 4 in [26]. The dominated convergence theorem can then be applied to conclude that

$$\lim_{r \downarrow 0} \frac{E_2}{b_{d-n}\, r^{d-n}} = 0,$$

since, roughly speaking, the off-diagonal contribution is a product of two factors, each of order $r^{d-n}$. Then the assertion follows.
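For the reader's convenience, we recall Campbell's formula in the form used above (a standard statement, see e.g. [4]; the factorization $\Lambda(\mathrm{d}(y,s)) = f(y,s)\,\mathrm{d}y\,Q(\mathrm{d}s)$ is the one appearing in the proof): for any nonnegative measurable $g$,

$$\mathbb{E}\Bigl[\sum_{(\xi_i, S_i) \in \Psi} g(\xi_i, S_i)\Bigr] = \int_{\mathbb{R}^d \times \mathbf{K}} g(y, s)\, \Lambda(\mathrm{d}(y, s)) = \int_{\mathbf{K}} \int_{\mathbb{R}^d} g(y, s)\, f(y, s)\,\mathrm{d}y\, Q(\mathrm{d}s).$$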
By combining Lemma 12 with the results in Section 3, we easily obtain some asymptotic results for the multivariate "Minkowski content"-based estimator.
Corollary 13 (LDP). Let $\Theta_n$ be a random closed set with integer Hausdorff dimension $n < d$ in $\mathbb{R}^d$ as above, satisfying (24) for some $(x_1, \dots, x_m) \in (\mathbb{R}^d)^m$, and let $\widehat{\boldsymbol\lambda}^{\mu,N}$ be the multivariate "Minkowski content"-based estimator defined in (23).
Then $\{\widehat{\boldsymbol\lambda}^{\mu,N} : N \ge 1\}$ satisfies the LDP with speed function $v_N = w_N$ and good rate function $I_m(\cdot\,; \boldsymbol\lambda_{\Theta_n})$.

Finally, by recalling that in this section $w_N = N b_{d-n} r_N^{d-n}$, $p_N = (P(x_1 \in \Theta_{n \oplus r_N}), \dots, P(x_m \in \Theta_{n \oplus r_N}))$ and $\Sigma$ is the diagonal covariance matrix defined in (22), the results on moderate deviations and on asymptotic Normality proved in Theorem 8 and in Theorem 10, respectively, may be stated for the multivariate "Minkowski content"-based estimator as follows.

Corollary 16 (Moderate deviations). Let $\Theta_n$ be as in the assumptions of Corollary 13. Then, for any sequence of positive numbers $\{a_N : N \ge 1\}$ such that $\lim_{N\to\infty} a_N = 0$ and $\lim_{N\to\infty} w_N a_N = \infty$, the sequence $\bigl\{\sqrt{a_N/w_N}\,(w_N \widehat{\boldsymbol\lambda}^{\mu,N} - N p_N) : N \ge 1\bigr\}$ satisfies the LDP with speed function $v_N = 1/a_N$ and good rate function $\widetilde I_m(\cdot\,; \boldsymbol\lambda_{\Theta_n})$ defined by $\widetilde I_m(y; \boldsymbol\lambda_{\Theta_n}) := \sum_{j=1}^m \frac{y_j^2}{2\lambda_{\Theta_n}(x_j)}$.

Corollary 17 (Asymptotic Normality). Let $\Theta_n$ be as in the assumptions of Corollary 13. Then the sequence $\bigl\{\frac{1}{\sqrt{w_N}}(w_N \widehat{\boldsymbol\lambda}^{\mu,N} - N p_N) : N \ge 1\bigr\}$ converges weakly (as $N \to \infty$) to the centered $m$-variate Normal distribution $\mathcal{N}_m(0, \Sigma)$.

Confidence regions for the mean density
We mentioned that the evaluation and the estimation of the mean density of a random closed set is a problem of great interest in Stochastic Geometry. Having shown that $\widehat{\boldsymbol\lambda}^{\mu,N}$ is asymptotically unbiased and consistent for $\boldsymbol\lambda_{\Theta_n}$, a natural problem is now to find confidence regions for $\boldsymbol\lambda_{\Theta_n}$ at a certain fixed level $\alpha$, given an i.i.d. random sample $\Theta_n^{(1)}, \dots, \Theta_n^{(N)}$ for $\Theta_n$. The asymptotic Normality result derived in the previous section will help us in finding a suitable statistic (see Theorem 21 below). It is worth noticing that it will be crucial to choose a suitable bandwidth $r_N$ for $\widehat{\boldsymbol\lambda}^{\mu,N}$. In the univariate case ($m = 1$) such a bandwidth turns out to be the optimal bandwidth $r_N^{o,AMSE}(x)$, defined in [10] as the value which minimizes the asymptotic mean square error (AMSE) of $\widehat\lambda^{\mu,N}_{\Theta_n}(x)$:

$$r_N^{o,AMSE}(x) := \operatorname*{arg\,min}_{r_N}\, \mathrm{AMSE}\bigl(\widehat\lambda^{\mu,N}_{\Theta_n}(x)\bigr). \qquad (29)$$

This is the reason why, in order to propose confidence regions for the mean density, we shall introduce further regularity assumptions on the random set $\Theta_n$ and a common optimal bandwidth (see (30)) associated to the multivariate estimator $\widehat{\boldsymbol\lambda}^{\mu,N}$. For the sake of completeness we recall some basic results on $r_N^{o,AMSE}(x)$ proved in [10], and we refer there for further details. For the reader's convenience, we shall use the same notation as [10].
The mean square error $\mathrm{MSE}(\widehat\lambda^{\mu,N}_{\Theta_n}(x))$ of $\widehat\lambda^{\mu,N}_{\Theta_n}(x)$ is defined, as usual, by

$$\mathrm{MSE}\bigl(\widehat\lambda^{\mu,N}_{\Theta_n}(x)\bigr) := \mathbb{E}\bigl[\bigl(\widehat\lambda^{\mu,N}_{\Theta_n}(x) - \lambda_{\Theta_n}(x)\bigr)^2\bigr] = \mathrm{Var}\bigl(\widehat\lambda^{\mu,N}_{\Theta_n}(x)\bigr) + \bigl(\mathbb{E}\bigl[\widehat\lambda^{\mu,N}_{\Theta_n}(x)\bigr] - \lambda_{\Theta_n}(x)\bigr)^2.$$

A Taylor series expansion for the bias and the variance of $\widehat\lambda^{\mu,N}_{\Theta_n}(x)$ provides an asymptotic approximation of the mean square error, and then, by the definition given in (29), an explicit formula for $r_N^{o,AMSE}(x)$ is obtained (see Theorem 19 below). To fix the notation, in the sequel $\alpha := (\alpha_1, \dots, \alpha_d)$ will be a multi-index of $\mathbb{N}_0^d$; we denote $|\alpha| := \alpha_1 + \cdots + \alpha_d$ and $D^\alpha := \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}}$. Moreover, we denote by $\operatorname{reach}(A)$ the reach of a compact set $A \subset \mathbb{R}^d$, and by $\Phi_i(A; \cdot\,)$, $i = 0, \dots, n$, its curvature measures (we refer to [10, Appendix] and references therein for basic definitions and results on sets with positive reach and on curvature measures). An optimal bandwidth $r_N^{o,AMSE}(x)$ has been obtained for random sets as in (1) satisfying the following assumptions:

(R) for any $s \in \mathbf{K}$, $\operatorname{reach}(Z(s)) > R$ for some $R > 0$ independent of $s$, and there exists a closed set $\Xi(s) \supseteq Z(s)$ such that $\int_{\mathbf{K}} \mathcal{H}^n(\Xi(s))\, Q(\mathrm{d}s) < \infty$ and

$$\mathcal{H}^n\bigl(\Xi(s) \cap B_r(y)\bigr) \ge \gamma r^n \qquad \text{for all } y \in \Xi(s) \text{ and } r \in (0,1),$$

for some $\gamma > 0$ independent of $y$ and $s$;

(M2) a second-order domination condition on the density $f$, involving a function $\xi_{\mathbf{K}}(s, y)$ (we refer to [10] for the precise formulation);

(A3') a domination condition involving a function $\xi_{a,\mathbf{K},\mathbf{K}}(s_1, y_1, s_2, s_3)$ with suitable integrability properties (see [10]);

(M4) a further regularity condition, needed only in the case $n = d - 1$ (see [10]).

Remark 18. The above assumption (R) plays here the role of assumption (A1) of Theorem 3; namely, it is known that a lower dimensional set with positive reach is locally the graph of a function of class $C^1$ (e.g., see [6, p. 204]), and so the rectifiability condition in (A1) is fulfilled. Moreover, the condition $\operatorname{reach}(Z(s)) > R$ plays a central role in the proof of the theorem below, where a Steiner-type formula is applied. Referring to [10] for a more detailed discussion of the above assumptions, we point out here that (M2) implies (A2), while (A3') together with (A1) implies (A3).
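To convey the flavor of the bias–variance trade-off behind Theorem 19, here is a schematic computation of ours (the constant $C(x)$ and the first-order bias expansion are illustrative assumptions, not expressions from [10]). If the bias of $\widehat\lambda^{\mu,N}_{\Theta_n}(x)$ is of first order $C(x)\, r_N$, while its variance is asymptotically $\lambda_{\Theta_n}(x)/(N b_{d-n} r_N^{d-n})$, then

$$\mathrm{AMSE}\bigl(\widehat\lambda^{\mu,N}_{\Theta_n}(x)\bigr) \approx C(x)^2\, r_N^2 + \frac{\lambda_{\Theta_n}(x)}{N b_{d-n} r_N^{d-n}},$$

and setting the derivative in $r_N$ to zero yields an optimal bandwidth of order $r_N \asymp N^{-1/(d-n+2)}$; in particular $r_N \to 0$ and $N r_N^{d-n} \to \infty$, so that condition (5) is automatically satisfied in this schematic regime.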

Theorem 19 ([10]).
• Let $\Theta_n$ be as in (1) with $0 < n < d - 1$, satisfying the assumptions (R), (M2) and (A3'); then the optimal bandwidth $r_N^{o,AMSE}(x)$ exists and admits an explicit expression in terms of the curvature measures of the grains (see [10]).
• Let $\Theta_n$ be as in (1) with $0 < n = d - 1$, satisfying the assumptions (R), (M2), (A3') and (M4); then an analogous explicit expression for $r_N^{o,AMSE}(x)$ holds (see [10]).

We point out that, in the definition of $\widehat{\boldsymbol\lambda}^{\mu,N}$ given in (23), the bandwidth $r_N$ is the same for each component of the vector. Therefore the need emerges of defining a suitable common optimal bandwidth $r_N^{o,c}$. A possible solution might be to take the value which minimizes the usual asymptotic integrated mean square error; actually, since the $m$ points $x_1, \dots, x_m \in \mathbb{R}^d$ are fixed, a more feasible solution is to define as common optimal bandwidth the following quantity:

$$r_N^{o,c} := \operatorname*{arg\,min}_{r_N}\, \sum_{j=1}^m \mathrm{AMSE}\bigl(\widehat\lambda^{\mu,N}_{\Theta_n}(x_j)\bigr). \qquad (30)$$
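Anticipating the role of Theorem 21, the following sketch of ours turns Corollary 17 into a computable ellipsoidal region: it plugs the estimate into the diagonal covariance and neglects the bias term $N p_N/w_N - \boldsymbol\lambda_{\Theta_n}$, which is precisely what the optimal bandwidth is meant to control; the function names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

def confidence_region(lam_hat, w_N, alpha=0.05):
    """Asymptotic (1 - alpha) confidence region for the mean density vector,
    based on the approximation sqrt(w_N) * (lam_hat - lam) ~ N_m(0, diag(lam)):
    accept lam iff  w_N * sum_j (lam_hat_j - lam_j)^2 / lam_hat_j <= chi2 quantile.
    (Plug-in of lam_hat into the covariance; bias neglected -- our assumptions.)"""
    lam_hat = np.asarray(lam_hat, dtype=float)
    q = chi2.ppf(1.0 - alpha, df=lam_hat.size)

    def contains(lam):
        lam = np.asarray(lam, dtype=float)
        return w_N * np.sum((lam_hat - lam) ** 2 / lam_hat) <= q

    return contains

# Example usage:
# region = confidence_region([1.2, 0.8], w_N=150.0)
# region([1.0, 0.9])  # True iff (1.0, 0.9) lies in the region
```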