Multivariate second order Poincar\'e inequalities for Poisson functionals

Given a vector $F=(F_1,\dots,F_m)$ of Poisson functionals $F_1,\dots,F_m$, we investigate the proximity between $F$ and an $m$-dimensional centered Gaussian random vector $N_\Sigma$ with covariance matrix $\Sigma\in\mathbb{R}^{m\times m}$. Apart from finding proximity bounds for the $d_2$- and $d_3$-distances, based on classes of smooth test functions, we obtain proximity bounds for the $d_{convex}$-distance, based on the less tractable class of test functions consisting of indicators of convex sets. The bounds for all three distances are shown to be of the same, presumably optimal, order. The bounds are multivariate counterparts of the univariate second order Poincar\'e inequalities and, as such, are expressed in terms of integrated moments of first and second order difference operators. The derived second order Poincar\'e inequalities for indicators of convex sets are made possible by a new bound on the second derivatives of the solution to the Stein equation for the multivariate normal distribution. We present applications to the multivariate normal approximation of first order Poisson integrals and of statistics of Boolean models.


Overview
Roughly speaking, a first order Poincaré inequality for a random variable F measures the closeness of F to its mean. A second order Poincaré inequality [5] measures the closeness of F to a Gaussian random variable, where distance is given by some specified metric on the space of distribution functions. The paper [16] establishes second order Poincaré inequalities for Poisson functionals F, with bounds given in terms of integrated moments of first and second order difference operators; these bounds are an outcome of the research on the Malliavin-Stein method for Poisson functionals in recent years; see, for example, [7,23,30] and the book [22]. The bounds from [16] can be usefully applied to yield rates of normal convergence for various functionals of Poisson processes, including those represented as a sum of stabilizing score functions [15]. The rates are presumably optimal, as they coincide with the rates of convergence in the classical central limit theorem.
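For concreteness, we record the first order Poincaré inequality alluded to above (it appears as Theorem A.2 in the Appendix): for a Poisson functional $F \in \operatorname{dom} D$,
\[ \operatorname{Var} F \le \mathbb{E} \int_X (D_x F)^2 \, \lambda(dx), \]
where $D$ is the difference operator defined at (1.1) below. A second order Poincaré inequality replaces the variance on the left-hand side by a distance to Gaussianity and accordingly involves the second order difference operator as well.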
The goal of this paper is to establish second order Poincaré inequalities for Poisson functionals in the multivariate setting, providing multivariate counterparts to the univariate results of [16]. The proofs combine Malliavin calculus on Poisson spaces with Stein's method of multivariate normal approximation, and they yield presumably optimal rates of normal approximation.

Let $\eta$ be a Poisson process with a $\sigma$-finite intensity measure $\lambda$ on a measurable space $(X, \mathcal{F})$, and let $F$ be a Poisson functional, i.e., a random variable depending only on $\eta$. The difference operator of $F$ is defined by
\[ D_x F := F(\eta + \delta_x) - F(\eta), \qquad x \in X, \tag{1.1} \]
where $\delta_x$ denotes the Dirac measure of $x$. We say that $F$ belongs to the domain of the difference operator, i.e., $F \in \operatorname{dom} D$, if $\mathbb{E} F^2 < \infty$ and
\[ \mathbb{E} \int_X (D_x F)^2 \, \lambda(dx) < \infty. \tag{1.2} \]
Iterating the definition of the difference operator, one obtains the second order difference operator
\[ D^2_{x,y} F := D_y(D_x F) = F(\eta + \delta_x + \delta_y) - F(\eta + \delta_x) - F(\eta + \delta_y) + F(\eta), \qquad x, y \in X. \]
It is natural to investigate the proximity between the distribution of $F$ and that of a standard Gaussian random variable $N$. To compare two random variables $Y$ and $Z$ or, more precisely, their distributions, one can use the Kolmogorov distance
\[ d_K(Y, Z) := \sup_{u \in \mathbb{R}} |\mathbb{P}(Y \le u) - \mathbb{P}(Z \le u)|, \tag{1.3} \]
which is the supremum norm of the difference of the distribution functions of $Y$ and $Z$, or the Wasserstein distance
\[ d_W(Y, Z) := \sup_{h \in \operatorname{Lip}(1)} |\mathbb{E}\, h(Y) - \mathbb{E}\, h(Z)|, \]
where $\operatorname{Lip}(1)$ stands for the set of functions $h : \mathbb{R} \to \mathbb{R}$ with Lipschitz constant at most one. Note that the $d_K$-distance is always defined, while the $d_W$-distance requires finiteness of $\mathbb{E}|Y|$ and $\mathbb{E}|Z|$.
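To illustrate the difference operators just introduced, consider the Poisson functional $F = \eta(A)$, the number of points of $\eta$ in a set $A \in \mathcal{F}$ with $\lambda(A) < \infty$. Then
\[ D_x F = (\eta + \delta_x)(A) - \eta(A) = \mathbf{1}\{x \in A\} \qquad \text{and} \qquad D^2_{x,y} F = 0, \quad x, y \in X, \]
since $D_x F$ does not depend on $\eta$. This vanishing of the second difference operator is characteristic of the first order Poisson integrals considered in Subsection 4.1.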
When $F \in \operatorname{dom} D$, $\mathbb{E} F = 0$, and $\operatorname{Var} F = 1$, the main results of [16] establish the inequalities
\[ d_W(F, N) \le \tau_1 + \tau_2 + \tau_3 \tag{1.4} \]
and
\[ d_K(F, N) \le \tau_1 + \tau_2 + \tau_3 + \tau_4 + \tau_5 + \tau_6, \tag{1.5} \]
where $\tau_1, \dots, \tau_6$ are integrated moments involving only $DF$ and $D^2 F$ (see Subsection 1.2 in [16] for the exact formulas). The proximity bounds (1.4) and (1.5), whose proofs rely on previous Malliavin-Stein bounds in [23] and [7,30], respectively, are second order Poincaré inequalities, as described in [16]. The reason for this name is that the 'first order' Poincaré inequality for $F \in \operatorname{dom} D$ bounds the variance in terms of the first difference operator, whereas the first and the second difference operators control the closeness to Gaussianity in (1.4) and (1.5). The term second order Poincaré inequality was coined in [5] in a similar Gaussian framework, where one has the first two derivatives instead of the first two difference operators.
For many Poisson functionals $F$ the second order Poincaré inequalities (1.4) and (1.5) may be evaluated, since the first two difference operators have a clear interpretation via the operation of adding points. This is the advantage of these findings over Malliavin-Stein bounds for the normal approximation of Poisson functionals, which either require knowledge of the chaos expansion of $F$ (see, for example, [7,12,23,30]) or involve bounds expressed in terms of gradient operators and conditional expectations as in [25]. Inequality (1.5) yields rates of normal approximation for some classical problems in stochastic geometry and some non-linear functionals of Poisson shot noise processes [16], as well as for functionals of convex hulls of random samples in a smooth convex body, statistics of nearest neighbors graphs, the number of maximal points in a random sample, and estimators of surface area and volume arising in set approximation [15]. The rates of convergence for these examples are presumably optimal.
Often one is not only interested in the behavior of a single Poisson functional but in that of a vector $F = (F_1, \dots, F_m)$ of Poisson functionals $F_1, \dots, F_m$ with $m \in \mathbb{N}$. In this situation, one can compare $F$ with an $m$-dimensional centered Gaussian random vector $N_\Sigma$ with covariance matrix $\Sigma \in \mathbb{R}^{m \times m}$. We are not only interested in the weak convergence of the vector $F$ of Poisson functionals to a limit random vector $N_\Sigma$, which can be deduced from the univariate case by the Cramér-Wold technique, but in quantitative bounds for the proximity between $F$ and $N_\Sigma$. In other words, we seek the multivariate counterparts of (1.4) and (1.5).
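For concreteness, the Cramér-Wold device reduces the multivariate limit theorem to its univariate counterpart:
\[ F \xrightarrow{d} N_\Sigma \quad \Longleftrightarrow \quad \langle a, F \rangle \xrightarrow{d} \langle a, N_\Sigma \rangle \sim \mathcal{N}(0, a^\top \Sigma a) \quad \text{for all } a \in \mathbb{R}^m, \]
but this equivalence is purely qualitative and provides no bound on any distance between the distributions of $F$ and $N_\Sigma$.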
In this paper $F$ and $N_\Sigma$ are compared with respect to distances based on smooth and non-smooth test functions. One of our main achievements is to show that for each distance, the bounds are of the same, presumably optimal, order. In general, it is more intricate to deal with non-smooth test functions when one uses Stein's method for multivariate normal approximation. For some bounds for smooth test functions having the same order as in the univariate case we refer to [6, Chapter 12] and the references therein. For non-smooth test functions, even obtaining the rate $n^{-1/2}$ in the classical central limit theorem for sums of $n$ i.i.d. random vectors via Stein's method is challenging [1,11]. The abstract multivariate normal approximation results in terms of the dependence structure in [27] and [6, Chapter 12] and in terms of exchangeable pairs in [26] contain at least additional logarithmic factors compared to what one would expect from the case of smooth test functions or from the univariate case. Recently, these logarithms were removed in [9] and [10] (see also [8]), using the dependence structure and Stein couplings, respectively. However, it seems that none of these findings could be applied to systematically achieve the normal approximation bounds for Poisson functionals given by our main results.

Statement of main results
Let us now give a precise formulation of our results. We start with distances defined in terms of smooth test functions, namely the $d_2$- and the $d_3$-distances. Let $H^{(2)}_m$ be the set of all $C^2$-functions $h : \mathbb{R}^m \to \mathbb{R}$ such that
\[ |h(x) - h(y)| \le \|x - y\|, \quad x, y \in \mathbb{R}^m, \qquad \text{and} \qquad \sup_{x \in \mathbb{R}^m} \|\operatorname{Hess} h(x)\|_{op} \le 1, \]
where $\operatorname{Hess} h$ denotes the Hessian matrix of $h$ and $\|\cdot\|_{op}$ stands for the operator norm of a matrix. By $H_m$ we denote the class of all $C^3$-functions $h : \mathbb{R}^m \to \mathbb{R}$ such that the absolute values of the second and third partial derivatives are bounded by one. Using this notation, we define, for $m$-dimensional random vectors $Y$ and $Z$,
\[ d_2(Y, Z) := \sup_{h \in H^{(2)}_m} |\mathbb{E}\, h(Y) - \mathbb{E}\, h(Z)| \qquad \text{and} \qquad d_3(Y, Z) := \sup_{h \in H_m} |\mathbb{E}\, h(Y) - \mathbb{E}\, h(Z)|, \]
provided the expectations are finite. The paper [23] was the first to combine Stein's method and the Malliavin calculus to obtain normal approximation of Poisson functionals. In [24], the univariate main result of [23] for the $d_W$-distance is extended to vectors of Poisson functionals, and the $d_2$- and the $d_3$-distances are considered. Evaluating these multivariate Malliavin-Stein bounds in the same way one evaluates in [16] the univariate bounds from [23] and [7,30] to derive (1.4) and (1.5), one obtains the following multivariate second order Poincaré inequalities.
Let us now compare Theorem 1.1 with related results in the literature. The bounds in [24] are formulated in terms of the difference operator $D$ and the inverse Ornstein-Uhlenbeck generator $L^{-1}$ and do not, in general, readily lend themselves to off-the-shelf use. In contrast, bounds such as (1.6) and (1.7), involving only difference operators, are often tractable, as seen in our applications section and also in the companion paper [32]. Theorem 8.1 of [12] provides a bound on $d_3(F, N_\Sigma)$, which relies on the findings of [24]; this bound requires knowledge of the entire Wiener-Itô chaos expansion for each of the components of $F$ and consequently may also be less useful than (1.6). When the components of $F$ belong to a special class of Poisson $U$-statistics, which admit a finite chaos expansion with explicitly known kernels, the paper [18] uses the results of [24] to establish bounds for the $d_3$-distance between $F$ and a Gaussian random vector. In [3], the findings from [24] are generalized by comparing a vector of Poisson functionals with a random vector composed of Gaussian and Poisson random variables.
The paper [14] considers the distance
\[ d(Y, Z) := \sup_{u \in \mathbb{R}^m} |\mathbb{P}(Y \le u) - \mathbb{P}(Z \le u)|, \tag{1.8} \]
where $\le$ is understood componentwise, which is again the supremum norm of the difference of the distribution functions of $Y$ and $Z$. In (1.8) one only takes into account rectangular solids aligned with the coordinate planes, so that for a rotation $A \in \mathbb{R}^{m \times m}$ the distance between $AY$ and $AZ$ could be different from the distance between $Y$ and $Z$. Although convergence in the distance (1.8) already implies convergence in distribution, we consider the larger class of convex sets and study the distance
\[ d_{convex}(Y, Z) := \sup_{h \in I_m} |\mathbb{E}\, h(Y) - \mathbb{E}\, h(Z)|, \]
where $I_m$ is the set of all indicator functions of measurable convex sets in $\mathbb{R}^m$.
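Since each set $(-\infty, u_1] \times \dots \times (-\infty, u_m]$ appearing in (1.8) is measurable and convex, one always has
\[ d(Y, Z) \le d_{convex}(Y, Z), \]
so any bound on the $d_{convex}$-distance immediately yields a bound for the distance (1.8).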
We write $D_x F := (D_x F_1, \dots, D_x F_m)$ for $x \in X$ and $D^2_{x,y} F := (D^2_{x,y} F_1, \dots, D^2_{x,y} F_m)$ for $x, y \in X$, and define the quantities $\gamma_1, \dots, \gamma_6$ as integrated moments of these vectors, where $0$ stands for the origin in $\mathbb{R}^m$. The following multivariate second order Poincaré inequality for the $d_{convex}$-distance constitutes our main finding. The inequality is the multivariate counterpart to the bound for the Kolmogorov distance at (1.5) established in [16], and it closely resembles those for the $d_2$- and $d_3$-distances at (1.6) and (1.7). For a positive definite matrix $\Sigma \in \mathbb{R}^{m \times m}$ let $\Sigma^{1/2}$ be the positive definite matrix in $\mathbb{R}^{m \times m}$ such that $\Sigma^{1/2} \Sigma^{1/2} = \Sigma$ and let $\Sigma^{-1/2} := (\Sigma^{1/2})^{-1}$.
Several existing results for the multivariate normal approximation of general random vectors in the $d_{convex}$-distance, or generalizations of it [6,9,10,27], all require some almost sure boundedness assumptions; in our set-up this would amount to requiring that $|D_x F_i|$ is almost surely bounded for $x \in X$ and $i \in \{1, \dots, m\}$. One of the main achievements of Theorem 1.2 is that no such assumption is required. For results without almost sure boundedness assumptions we refer to [8, Chapter 3] and, with weaker rates of convergence, to [26, Corollary 3.1]. A second main achievement of Theorem 1.2 is that there are no logarithmic terms in the bound (1.9) (see the discussion at the end of Subsection 1.1). The Malliavin-Stein method is used in [20] to establish bounds in the $d_W$-distance for the multivariate normal approximation of functionals of Gaussian processes. In [13], a similar bound with an additional logarithm is derived for the $d_{convex}$-distance. As with Theorem 1.2, the latter result does not require any boundedness assumptions. Moreover, we expect that one can use our proof technique to remove the logarithm from the result in [13]. For a subclass of functionals of Gaussian processes, namely multiple Wiener-Itô integrals, one may even establish rates of multivariate normal approximation with respect to the total variation distance [21]. This bound also involves additional logarithmic factors, and its proof relies on controlling the relative entropy, an approach which differs from Stein's method.
Clearly, if the random vector $N_\Sigma$ is replaced by a normal random vector whose covariance matrix consists of the entries $\operatorname{Cov}(F_i, F_j)$, then the term comparing $\Sigma$ with the covariance matrix of $F$ in the bounds of our main theorems disappears.
In Theorem 1.2 we require that the covariance matrix $\Sigma$ of the approximating Gaussian random vector $N_\Sigma$ is positive definite. Otherwise, $N_\Sigma$ would be concentrated on some lower-dimensional linear subspace of $\mathbb{R}^m$. If $F$ belongs to any given lower-dimensional subspace of $\mathbb{R}^m$ with probability zero, then we would have $d_{convex}(F, N_\Sigma) \ge 1$. In such situations, one could have weak convergence without convergence in the $d_{convex}$-distance.
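A minimal example of this phenomenon: take $m = 2$ and $\Sigma = \operatorname{diag}(1, 0)$, so that $N_\Sigma$ is concentrated on the line $L := \mathbb{R} \times \{0\}$, a measurable convex set. If $F$ has a Lebesgue density on $\mathbb{R}^2$, then
\[ d_{convex}(F, N_\Sigma) \ge |\mathbb{P}(F \in L) - \mathbb{P}(N_\Sigma \in L)| = |0 - 1| = 1, \]
no matter how close $F$ and $N_\Sigma$ are in the sense of weak convergence.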

Examples and applications
At first sight, the bounds in our general results appear unwieldy. However, for many functionals of interest, we may readily bound the integrated moments of the difference operators, and the terms $\gamma_1, \dots, \gamma_6$ simplify. We illustrate this with four examples, which indicate that our bounds yield presumably optimal rates of convergence.
We start with the following analog of the classical central limit theorem for sums of i.i.d. random vectors, where we consider the sum of a Poisson distributed number of i.i.d. random vectors. Here, as in Theorems 1.1 and 1.2, we implicitly assume that the normal approximation bounds all involve finite quantities, as otherwise there is nothing to prove. The proof of the following result, Corollary 1.3, is postponed to Subsection 4.1; its bounds, stated at (1.10), are of the order $1/\sqrt{s}$. Since one can rewrite $Z_s$ as a sum of a fixed number of i.i.d. random vectors, one can also apply the classical multivariate central limit theorem. In [1,11,28] corresponding Berry-Esseen inequalities for the $d_{convex}$-distance are derived, which in the case of Corollary 1.3 provide rates of convergence of the order $1/\sqrt{s}$ as well. These findings are even stronger, since they require for the $d_{convex}$-distance only finite third moments, while we require finite fourth moments. The stricter assumptions in Corollary 1.3 might come from the fact that the proofs of the underlying results for more general Poisson functionals are not optimized for the special case considered here. Since $Z_s$ is a vector of first order Poisson integrals, Corollary 1.3 follows from a more general theorem in Subsection 4.1, which is obtained by applying our main results to first order Poisson integrals.
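To fix ideas, the set-up of Corollary 1.3 can be sketched as follows; the exact statement is given by the displays in Subsection 4.1, and the normalization below is our reading of it. Let $X_1, X_2, \dots$ be i.i.d. random vectors in $\mathbb{R}^m$ with $\mathbb{E}\|X_1\|^2 < \infty$, independent of a Poisson random variable $P_s$ with mean $s > 0$, and put
\[ Z_s := \frac{1}{\sqrt{s}} \Big( \sum_{i=1}^{P_s} X_i - s\, \mathbb{E} X_1 \Big), \qquad \text{so that} \qquad \operatorname{Cov}(Z_s) = \mathbb{E}\big[ X_1 X_1^\top \big]. \]
Indeed, $\sum_{i=1}^{P_s} X_i$ is the integral of the identity with respect to a Poisson process on $\mathbb{R}^m$ whose intensity measure is $s$ times the distribution of $X_1$, which explains why $Z_s$ is a vector of first order Poisson integrals.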
As a second example we consider, for fixed $m \in \mathbb{N}$, a family of vectors $F_s = (F_{1,s}, \dots, F_{m,s})$, $s > 0$, of square integrable Poisson functionals $F_{1,s}, \dots, F_{m,s}$ with underlying Poisson processes $\eta_s$, $s > 0$, having intensity measures $\mu_s$, $s > 0$, of the form $\mu_s = s\mu$ with a fixed finite measure $\mu$; e.g., homogeneous Poisson processes on the $d$-dimensional unit cube $[0,1]^d$ with increasing intensity. Moreover, we denote by $\Sigma_s$ the covariance matrix of $F_s$ and assume that $(\Sigma_s)_{s>0}$ converges to a matrix $\Sigma \in \mathbb{R}^{m \times m}$. Under some additional assumptions on the difference operators our main results imply the following result, proved in Subsection 4.3.

Corollary 1.4. Let $F_s$, $s > 0$, be as above and assume that $\Sigma$ is positive definite and that there are constants $a, b, \varepsilon \in (0, \infty)$ such that, for $i \in \{1, \dots, m\}$ and $s > 0$, the moment conditions (1.11) and (1.12) hold and
\[ s \int_X \mathbb{P}(D^2_{x,y} F_{i,s} \neq 0)^{\frac{\varepsilon}{36 + 6\varepsilon}} \, \mu(dy) \le b, \qquad \mu\text{-a.e. } x \in X. \tag{1.13} \]
Then there exist constants $s_0, C_3, C_2, C_{convex} \in (0, \infty)$ depending on $a, b, \varepsilon, m, \mu(X), \Sigma$, and $(\Sigma_s)_{s>0}$ such that the stated bounds hold. The rates of convergence in Corollary 1.4 are of the order $s^{-1/2}$ for all distances.
The set-up of Corollary 1.4, in which one rescales by the square root of the intensity parameter and in which the $(6+\varepsilon)$-th moments of the unrescaled difference operators are bounded, frequently occurs in problems from stochastic geometry; see, e.g., [15,16].
The third example is the situation where, before centering, the components of $F$ are sums of scores associated with the points of a Poisson process lying in bounded subsets of $\mathbb{R}^d$ indexed by $i \in \{1, \dots, m\}$; here $\eta_{sg}$ is a Poisson process in $\mathbb{R}^d$ whose intensity measure has density $sg$ with respect to the Lebesgue measure, and $\xi^{(i)}_s$, $i \in \{1, \dots, m\}$, are stabilizing score functions. Then the companion paper [32], which can be seen as a multivariate counterpart to some of the findings in [15], shows that the right-hand sides of (1.6), (1.7), and (1.9) can be bounded by expressions depending on the score functions $\xi^{(i)}_s$, $i \in \{1, \dots, m\}$, and $g$. This means that the approximation error consists of a term taking into account the difference of the covariances and a term of order $s^{-1/2}$, which also occurs in the univariate case (see [15]). In Section 3 of [32], these findings are applied to obtain quantitative multivariate central limit theorems for statistics of $k$-nearest neighbors graphs and random geometric graphs as well as for statistics arising in topological data analysis and entropy estimation.
A fourth example concerns the intrinsic volumes of Boolean models, a prominent problem from stochastic geometry. Let $V_d(W)$ be the volume of the compact convex observation window $W \subset \mathbb{R}^d$. If one compares the vector of intrinsic volumes of the Boolean model in $W$ with a centered Gaussian random vector having exactly the same covariance matrix, and if one increases the inradius of $W$, then our main results lead to the rate of normal convergence $V_d(W)^{-1/2}$; see Subsection 4.2.
In the last three examples the rates of convergence $s^{-1/2}$ and $V_d(W)^{-1/2}$, respectively, are comparable to the rate $n^{-1/2}$ in the uni- and multivariate central limit theorems for the i.i.d. case and are thus presumably optimal.
Among these examples, we will consider the first order Poisson integrals generalizing the situation of Corollary 1.3 and the intrinsic volumes of Boolean models in more detail in Subsections 4.1 and 4.2, while Corollary 1.4 is a consequence of a theorem derived in Subsection 4.3.

Proof techniques
Let us now informally comment on the method of proof. The proofs of Theorems 1.1 and 1.2 are based on the Malliavin calculus on the Poisson space and Stein's method for multivariate normal approximation. In particular, we apply a smoothing technique, which we discuss in this subsection. Assume we aim to compare an $m$-dimensional random vector $Y = (Y_1, \dots, Y_m)$ with an $m$-dimensional centered Gaussian random vector $N_I$ with the identity matrix $I \in \mathbb{R}^{m \times m}$ as covariance matrix (we assume $\Sigma = I$ for simplicity) in terms of a measurable test function $h : \mathbb{R}^m \to \mathbb{R}$. The idea of Stein's method for multivariate normal approximation (see, e.g., [6,11]) is to use the identity
\[ \mathbb{E}\, h(Y) - \mathbb{E}\, h(N_I) = \mathbb{E} \big[ \Delta f_h(Y) - \langle Y, \nabla f_h(Y) \rangle \big], \tag{1.14} \]
where $f_h$ solves the Stein equation associated with $h$. Under some smoothness assumptions on $h$ one can give formulas for $f_h$ (see, for example, Lemma 2.6 in [6]). However, for non-smooth $h$ such as indicator functions of convex sets, it appears unclear how to deal with $f_h$. This problem is resolved by considering, instead of $h$, a smoothed $C^\infty$ version $h_{t,I}$ of $h$, which depends on a smoothing parameter $t \in (0,1)$. Of course one makes some error by replacing the test functions defining the $d_{convex}$-distance by their smoothed versions, but a smoothing lemma allows us to bound this error by some constant multiple of $\sqrt{t}$. Thus it remains to find upper bounds for $|\mathbb{E}\, h_{t,I}(Y) - \mathbb{E}\, h_{t,I}(N_I)|$ as a function of $t \in (0,1)$. We sketch how this goes as follows. Given $h : \mathbb{R}^m \to \mathbb{R}$ measurable and bounded and $t \in (0,1)$, we introduce the smoothed function $h_{t,I}$, obtained by integrating $h$ against a rescaled version of $\varphi_I$, where $\varphi_I$ denotes the density of $N_I$. The function $f_{t,h,I} : \mathbb{R}^m \to \mathbb{R}$ given at (1.16) solves the Stein equation (1.14) with $h$ replaced by $h_{t,I}$, and applying this identity to $F$ together with Malliavin calculus leads to a bound in terms of the operators $D$ and $L^{-1}$, where $D_x$ is the difference operator given in (1.1) and $L^{-1}$ is the inverse Ornstein-Uhlenbeck generator defined in the Appendix. A main idea behind the proof of Theorem 1.2 is to show that the bound for the right-hand side of the above involves second derivatives of $f_{t,h,I}$ evaluated at $F$, and then to use the estimate (1.17), valid for all $t \in (0,1)$ and $i, j \in \{1, \dots, m\}$, where $M_2 \le m^2$. By choosing $t$ appropriately we may deduce Theorem 1.2. The inequality (1.17) is not restricted to a vector $F$ of Poisson functionals, but holds for arbitrary random vectors $Y$ in $\mathbb{R}^m$, as described in Proposition 2.3. Thus, we expect that it might be helpful for other applications of Stein's method for multivariate normal approximation. In our main results we provide explicit constants, which are sometimes very large. In part, this is caused by some generous estimates in our proofs, used to obtain relatively short bounds valid for all choices of $m$ and to simplify the proofs. We expect that one could obtain better constants in many instances by going back to our proofs and using the particular structure of the functionals and the choice of $m$.
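As a hedged sketch of such a Gaussian smoothing (the representative form below is an assumption on our part; the paper's exact normalization may differ by constants), one may take
\[ h_{t,I}(y) := \int_{\mathbb{R}^m} h\big( \sqrt{t}\, z + \sqrt{1-t}\, y \big) \varphi_I(z) \, dz, \qquad t \in (0,1). \]
Then $h_{t,I}$ is of class $C^\infty$, each differentiation in $y$ costs a factor of order $t^{-1/2}$, and $h_{t,I}$ converges to $h$ at continuity points of $h$ as $t \to 0$, which is what makes the trade-off in the choice of $t$ possible.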

Structure of the paper
This paper is organized as follows. The next section provides a smoothing lemma and bounds on solutions of the multivariate Stein equation, including the aforementioned Proposition 2.3. Section 3, which draws on the auxiliary results of Section 2, is devoted to the proofs of our main results. Section 4 deals with the application of our findings to first order Poisson integrals and intrinsic volumes of Boolean models. Moreover, we further evaluate our results for the case of marked Poisson processes; this result will be used in the companion paper [32]. In the Appendix we recall the definitions of the Malliavin operators as well as some results from Malliavin calculus on the Poisson space that are used in Section 3.
Given a measurable and bounded $h : \mathbb{R}^m \to \mathbb{R}$ and $t \in (0,1)$, recall the smoothed version $h_{t,\Sigma}$ of $h$ introduced in the discussion of our proof techniques. The following smoothing lemma bounds $d_{convex}(Y, N_\Sigma)$, for an $m$-dimensional random vector $Y$, $t \in (0,1)$, and positive definite $\Sigma \in \mathbb{R}^{m \times m}$, in terms of the smoothed test functions plus a constant multiple of $\sqrt{t}$.

Proof. We first establish that the asserted bound holds when $\Sigma$ is replaced by $I$. Indeed, this is the statement of [11, Lemma 2.11] with $\varepsilon = \sqrt{t}$, $\Delta = 2\sqrt{m}$ (see [11, p. 725] as well as [2, Corollary 3.2]), and $a_m \le 2\sqrt{2m}$ (which follows from Markov's inequality) there.
Next, to show that this bound holds for positive definite $\Sigma \in \mathbb{R}^{m \times m}$, it suffices to notice that smoothing and the $d_{convex}$-distance are compatible with the linear transformation $\Sigma^{-1/2}$. To verify the second identity, notice that for any $h \in I_m$ the function $h(\Sigma^{1/2}\,\cdot)$ is again the indicator function of a measurable convex set.

Bounds on the derivatives of the solution to Stein's equation for multivariate normal approximation
We extend the definition of $f_{t,h,I}$ at (1.16) to a general covariance matrix $\Sigma$. This goes as follows. For $h : \mathbb{R}^m \to \mathbb{R}$ measurable and bounded, $\Sigma = (\sigma_{ij})_{i,j \in \{1,\dots,m\}} \in \mathbb{R}^{m \times m}$ positive definite, and $t \in (0,1)$, the function $f_{t,h,\Sigma} : \mathbb{R}^m \to \mathbb{R}$ is defined by the analog of (1.16) with $\varphi_I$ replaced by $\varphi_\Sigma$. Differentiating under the integral sign yields the formulas (2.1) and (2.2) for the first and second partial derivatives, which, together with a short computation, give the corresponding formula for the third partial derivatives. From the above formulas for the derivatives of $f_{t,h,\Sigma}$ one can deduce sup-norm bounds such as (2.5). Sup-norm bounds on the derivatives of $f_{t,h,\Sigma}$ go hand in hand with the following more useful second moment bound. It is a key to controlling the right-hand side of the smoothing inequality in Lemma 2.2, an essential part of the proof of Theorem 1.2.
Let $Y$ be an $m$-dimensional random vector and let $\Sigma \in \mathbb{R}^{m \times m}$ be positive definite. Then the second moments of the second partial derivatives of $f_{t,h,\Sigma}$ evaluated at $Y$ satisfy the stated bound for all $t \in (0,1)$.
We prepare the proof of Proposition 2.3 with the following lemmas.
Proof. For any measurable convex set $A \subseteq \mathbb{R}^m$ the claim follows from a direct computation, where we used Lemma 2.1 for the last inequality.
Lemma 2.5. For any positive definite $\Sigma \in \mathbb{R}^{m \times m}$ and $i, j \in \{1, \dots, m\}$, the stated identity holds.

Proof. As noted at display (12.72) of [6], the integral in question is constant (equal to one), so its derivative vanishes.
Lemma 2.6. For all $h \in I_m$ and $t \in (0,1)$, the stated bound holds.

Proof. Put $h := \mathbf{1}\{\cdot \in A\}$ for some measurable convex set $A \subseteq \mathbb{R}^m$. Then, for $i, j \in \{1, \dots, m\}$ and $y \in \mathbb{R}^m$, it follows from (2.1) that the derivative in question can be expressed as a Gaussian integral. For $s \in (0,1)$ and $y \in \mathbb{R}^m$ let $r_{s,y}$ be defined as in the corresponding display, where $B^m(x, r)$ denotes the closed ball with center $x \in \mathbb{R}^m$ and radius $r \ge 0$.
Letting $\varphi$ be the density of a standard Gaussian random variable, we have, for all $a \in \mathbb{R}$, the corresponding one-dimensional estimate. We obtain a bound in which $I_{i,j}$ appears, where $I_{i,j}$ is the identity matrix $I$ with the $i$-th and the $j$-th diagonal elements replaced by $2$. Consequently, the Markov inequality yields
\[ \mathbb{P}\big( \|N_{I_{i,j}}\| \ge r_{s,y} \big) \le \frac{\mathbb{E}\, \|N_{I_{i,j}}\|}{r_{s,y}}. \]
Hence, we obtain a further estimate, and the Cauchy-Schwarz inequality leads to the asserted bound for $\Sigma = I$. For general positive definite $\Sigma$, note that for any $h \in I_m$ the function $h(\Sigma^{1/2}\,\cdot)$ is the indicator function of a measurable convex set, whence the special case proven above (for $\Sigma = I$) and the observation that $d_{convex}(\Sigma^{-1/2} Y, N_I) = d_{convex}(Y, N_\Sigma)$ complete the proof of Proposition 2.3.

Proofs of the main results
Throughout this section we assume that the reader is familiar with Malliavin calculus on the Poisson space. The Appendix provides the essential definitions and properties of Malliavin operators needed in the sequel.

Proof of Theorem 1.1
The starting points for the proofs for the $d_3$- and the $d_2$-distances are the following quantitative Malliavin-Stein bounds; for the operators involved see [16,24] or the Appendix. Then the stated bound for the $d_3$-distance holds; if, additionally, $\Sigma$ is positive definite, then the corresponding bound for the $d_2$-distance holds as well. The main difficulty in evaluating these bounds is to control the behavior of the terms involving $L^{-1}$, which will be done in the same way as in [16].
Combining Proposition 3.1 and Proposition 3.2 yields the proof of Theorem 1.1, which goes as follows.

It follows from Hölder's inequality and Proposition 3.2(a) that the required moment bounds hold.
Now Proposition 3.1 completes the proof of Theorem 1.1.

Proof of Theorem 1.2
Throughout this subsection we use several Malliavin operators, namely the already introduced difference operator D, the inverse Ornstein-Uhlenbeck generator L −1 , and the Skorohod integral δ. Recall that we denote the domain of D by dom D and we define dom δ similarly. For definitions we refer to the Appendix.
We prepare the proof of Theorem 1.2 by the following lemma.

Lemma 3.3.
For an $m$-dimensional random vector $Y$, a measurable convex set $A \subseteq \mathbb{R}^m$, a positive definite matrix $\Sigma \in \mathbb{R}^{m \times m}$, and $w \ge 0$, the stated inequality holds.

Proof. Using the abbreviations $A^w := \{y \in \mathbb{R}^m : d(y, A) \le w\}$ and $A^{-w} := \{y \in A : d(y, \partial A) > w\}$, we obtain the displayed estimates. Since $A^w$ and $A^{-w}$ are measurable and convex, we have the corresponding bounds in terms of $d_{convex}$. Together with Lemma 2.1, the claim follows, which completes the proof.
The next proposition is an abstract formulation of one of the main ideas of the proof of Proposition 3.2(b) (see also [17, Lemma 21.4]). For the definition of the operator $P_s$ we refer to the Appendix. The underlying result is stated there for $p = q = 2$, but by using Hölder's inequality instead of the Cauchy-Schwarz inequality in the last two steps of its proof, one can extend it to $p, q \in (1, \infty)$ with $1/p + 1/q = 1$.
Proof of Theorem 1.2. In the following five-part proof we may assume that $\gamma_1, \dots, \gamma_6 < \infty$, since otherwise there is nothing to prove. Throughout, let $h : \mathbb{R}^m \to \mathbb{R}$ be the indicator function of a measurable convex set $K \subseteq \mathbb{R}^m$.
The idea of the proof goes as follows. We first establish the bound (3.3), where $J_1$, $J_{2,1}$, and $J_{2,2}$ are given below. We then show that the three terms on the right-hand side of (3.3) are each bounded by products of powers of $\gamma$ and factors such as $1/\sqrt{t}$, $|\log t| \, d_{convex}(F, N_\Sigma)$, or $(1/\sqrt{t})\, d_{convex}(F, N_\Sigma)$, and then choose $t$ appropriately. The fundamental theorem of calculus yields a first decomposition, and further applications of the fundamental theorem of calculus lead to (3.6).

Part (iii): A bound for $J_{2,1}$. We start by rewriting $J_{2,1}$. All third partial derivatives of $f_{t,h,\Sigma}$ are bounded by some constant (recall (2.5)), and thus $\frac{\partial^2 f_{t,h,\Sigma}}{\partial y_j \partial y_k}(F) \in \operatorname{dom} D$ for $j, k \in \{1, \dots, m\}$.
From Lemma A.4 and the computation for $\mathbb{E}\, \delta(D F_j (-D L^{-1} F_k))^2$ below, one deduces that $D F_j (-D L^{-1} F_k) \in \operatorname{dom} \delta$. It follows from integration by parts (see Lemma A.3) and the Cauchy-Schwarz inequality that

By Proposition 2.3, the first factor admits the second moment bound stated there.
For the summands in the second factor it follows from Lemma A.4 that the displayed bound holds, where we used the arithmetic-geometric mean inequality $a_1 a_2 \le \tfrac{1}{2}(a_1^2 + a_2^2)$ for $a_1, a_2 \in (0, \infty)$ as well as Lemma A.1 and Jensen's inequality. It follows from Proposition 3.2(a) and the Cauchy-Schwarz inequality that the resulting integrals with respect to $\lambda^2(d(x, y))$ are finite.
Since $\gamma_4 < \infty$, the right-hand side is finite, which implies that assumption (A.2) and the second moment condition of Lemma A.4 are satisfied.

Part (iv): A bound for $J_{2,2}$. The bound for $|J_{2,2}|$ is more involved and goes as follows.
First, note that the triangle inequality and (2.2) imply the displayed estimate. Using the abbreviation $U_{ijk}$, $i, j, k \in \{1, \dots, m\}$, defined in the corresponding display, and the Cauchy-Schwarz inequality, we obtain

By (2.4) and substitution the first integral satisfies the bound
The Cauchy-Schwarz inequality yields a further bound, together with the observation that, for a standard univariate Gaussian random variable $N$ with density $\varphi$, the displayed identity holds. Next we bound $U_{ijk}$ for fixed $i, j, k \in \{1, \dots, m\}$. We define $r(D_x F) := \frac{1}{\|D_x F\|} D_x F$. Using the substitution $w = v \|D_x F\|$ for the first term, we obtain a decomposition of $U_{ijk}$.
Recall that $h(\cdot) = \mathbf{1}\{\cdot \in K\}$ for a measurable convex set $K \subseteq \mathbb{R}^m$. We have the displayed estimate, where we used the arithmetic-geometric mean inequality and Proposition 3.2(a) in the last step. This implies a bound that remains to be controlled; we shall do this with the aid of Proposition 3.4 and the Poincaré inequality. By way of preparation, define $K_{s,z}$ as in the corresponding display. Thus, we have the stated decomposition in terms of $V^{(1)}_{jk}$ and $V^{(2)}_{jk}$.

Now Lemma 3.3, the arithmetic-geometric mean inequality, and Proposition 3.2(a) imply that
Consequently, by the Cauchy-Schwarz inequality and Lemma 3.3, we obtain the displayed estimate. The existence of the variances in the definitions of $V^{(1)}_{jk}$ and $V^{(2)}_{jk}$ will be discussed below.
To further bound $V^{(1)}_{jk}$ and $V^{(2)}_{jk}$ we will apply the Poincaré inequality (see Theorem A.2). We prepare this by computing difference operators; together with Lemma A.1, we obtain the displayed formulas.

Now it follows from the Poincaré inequality that $V^{(1)}_{jk}$ admits the displayed bound. This, in turn, follows from Proposition 3.4 with $p = 3/2$ and $q = 3$, which bounds the remaining expectations involving integrals over $(x_1, x_2, y)$. A short computation using Hölder's inequality shows that the sum over $j, k \in \{1, \dots, m\}$ satisfies (3.13). By the Poincaré inequality (see Theorem A.2), we also obtain a bound for $V^{(2)}_{jk}$. Here, the first term is bounded because $F_j, F_k \in \operatorname{dom} D$. Using (3.12) and Proposition 3.4 in a similar way as above, one obtains that the second term can be bounded as in (3.14).

Part (v): Putting the pieces together and choosing $t$. Finally, we may evaluate the right-hand side of (3.3). Recalling the definition of $\gamma$ at (3.2), we may simplify (3.6), (3.7), and (3.14). Together with (2.6) and (3.9), choosing $t$ appropriately yields the asserted inequality, which completes the proof.

Multivariate normal approximation of first order Poisson integrals
In this subsection we apply our main results to first order Poisson integrals with respect to the Poisson process $\eta$ (as considered before). For $f \in L^1(\lambda) \cap L^2(\lambda)$ we define $I_1(f)$ to be the Poisson integral of $f$ (also called the Wiener-Itô integral of $f$ in [17]), namely
\[ I_1(f) := \int_X f(x) \, \eta(dx) - \int_X f(x) \, \lambda(dx). \]
If $\eta$ is a proper Poisson process, i.e., it almost surely admits a representation $\eta = \sum_{i \in I} \delta_{X_i}$ with a countable collection $(X_i)_{i \in I}$ of random elements of $X$, this can be rewritten as
\[ I_1(f) = \sum_{i \in I} f(X_i) - \int_X f(x) \, \lambda(dx). \]
Using approximation arguments in $L^2(\mathbb{P})$, one can extend the above definition to integrands $f \in L^2(\lambda)$. Note that, for all $f, g \in L^2(\lambda)$,
\[ \mathbb{E}\, I_1(f) = 0 \qquad \text{and} \qquad \mathbb{E}\, I_1(f) I_1(g) = \int_X f(x) g(x) \, \lambda(dx). \]
Let $f_1, \dots, f_m \in L^2(\lambda)$ with $m \in \mathbb{N}$ and let $\Sigma = (\sigma_{ij})_{i,j \in \{1,\dots,m\}} \in \mathbb{R}^{m \times m}$ be positive semi-definite.
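For first order Poisson integrals the difference operators take a particularly simple form: a direct computation from (1.1) gives, for $f \in L^1(\lambda) \cap L^2(\lambda)$,
\[ D_x I_1(f) = \int_X f \, d(\eta + \delta_x) - \int_X f \, d\eta = f(x) \qquad \text{and} \qquad D^2_{x,y} I_1(f) = 0, \quad x, y \in X. \]
Hence $D_x I_1(f)$ is deterministic, all terms of our general bounds involving second order difference operators vanish, and only integrated moments of $f_1, \dots, f_m$ remain.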

Multivariate central limit theorems for intrinsic volumes of Boolean models
In the following, we derive quantitative multivariate central limit theorems for Boolean models, extending previous findings in [12] and [17,Chapter 22]. Our proofs rely on the general bounds from Subsection 1.2 as well as arguments from [12] and [17,Chapter 22].
We denote by $\mathcal{K}^d$ the set of compact convex sets in $\mathbb{R}^d$, $d \ge 1$. For a probability measure $\mathbb{Q}$ on $\mathcal{K}^d$ such that $\mathbb{Q}(\{\emptyset\}) = 0$ and $\gamma > 0$, let $\eta$ be a Poisson process on $\mathbb{R}^d \times \mathcal{K}^d$ with intensity measure $\gamma \lambda_d \otimes \mathbb{Q}$, where $\lambda_d$ is the Lebesgue measure on $\mathbb{R}^d$. Note that $\eta$ is a stationary Poisson process in $\mathbb{R}^d$ with independent marks in $\mathcal{K}^d$ distributed according to $\mathbb{Q}$. A random compact convex set $Z_0$ distributed according to $\mathbb{Q}$ is called the typical grain. From $\eta$ we construct the random closed set
\[ Z := \bigcup_{(x, K) \in \eta} (K + x), \]
which is called the Boolean model. For more details on Boolean models and further references we refer to [29].
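A classical formula illustrating the roles of $\gamma$ and $\mathbb{Q}$ is the capacity functional of the Boolean model (see, e.g., [29]): for compact $C \subset \mathbb{R}^d$,
\[ \mathbb{P}(Z \cap C = \emptyset) = \exp\big( -\gamma\, \mathbb{E}\, \lambda_d(Z_0 \oplus \check{C}) \big), \]
where $\oplus$ denotes Minkowski addition and $\check{C} := \{-x : x \in C\}$. In particular, the volume fraction of $Z$ equals $1 - \exp(-\gamma\, \mathbb{E}\, V_d(Z_0))$.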
By the convex ring $\mathcal{R}^d$ we mean the set of all finite unions of elements from $\mathcal{K}^d$. Let $V_0, V_1, \dots, V_d : \mathcal{R}^d \to \mathbb{R}$ be the intrinsic volumes (see, for example, [29, Section 14.2] for a definition via the Steiner formula and additive extensions). In particular, for $A \in \mathcal{R}^d$, $V_d(A)$ is the volume of $A$, $V_{d-1}(A)$ is half the surface area of $A$ (if $A$ is the closure of its interior), and $V_0(A)$ is the Euler characteristic of $A$.
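As a concrete example, for the ball $B^d(0, r)$ the intrinsic volumes are
\[ V_j(B^d(0, r)) = \binom{d}{j} \frac{\kappa_d}{\kappa_{d-j}} \, r^j, \qquad j \in \{0, \dots, d\}, \]
where $\kappa_i$ denotes the volume of the $i$-dimensional unit ball; in particular, $V_0(B^d(0, r)) = 1$ and $V_d(B^d(0, r)) = \kappa_d r^d$.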
In the sequel we study the intersection of the Boolean model $Z$ with a compact convex observation window $W \in \mathcal{K}^d$. Note that $Z \cap W$ almost surely belongs to $\mathcal{R}^d$ if $\mathbb{E}\, V_i(Z_0) < \infty$ for $i \in \{1, \dots, d\}$. Questions of interest include finding the fraction of $W$ covered by $Z$ and the surface area of $Z \cap W$. We address both problems simultaneously by considering the vector $V(Z \cap W) := (V_0(Z \cap W), \dots, V_d(Z \cap W))$. Denote by $r(K)$ the inradius of $K \in \mathcal{K}^d$. In [12, Theorem 3.1] it is shown that there exists a matrix $\Sigma = (\sigma_{i,j})_{i,j \in \{0,\dots,d\}} \in \mathbb{R}^{(d+1) \times (d+1)}$ such that the covariances of the normalized intrinsic volumes converge to the entries of $\Sigma$ as $r(W) \to \infty$ (see also [12, Theorem 4.1]). We describe the asymptotic behavior of $V(Z \cap W)$ as $r(W) \to \infty$ with respect to $d_3$, $d_2$, and $d_{convex}$.
Under suitable moment assumptions on $V_i(Z_0)$ for $i \in \{1, \dots, d\}$ and $\mathbb{P}(V_d(Z_0) > 0) > 0$, there exists a constant $C_2 \in (0, \infty)$ depending on $d$, $\gamma$, and $\mathbb{Q}$ such that the stated bound holds for all $W \in \mathcal{K}^d$ with $r(W) \ge 1$; analogous statements hold for the $d_3$-distance and the non-smooth $d_{convex}$-distance. The findings of [12] as well as the univariate results in [17] consider so-called geometric functionals, which include the intrinsic volumes. Theorem 4.2 could also be generalized to these functionals, but for the sake of simplicity we consider only intrinsic volumes. Since our proof of Theorem 4.2 is based on second order Poincaré inequalities, it does not require dealing with the whole chaos expansion as in [12]. For previous results on the volume and surface area of Boolean models we refer the reader to [12]. Theorem 4.2 indicates that the slow convergence of $\Sigma(W)$ to $\Sigma$ weakens the rate of convergence for $d \ge 3$ (see also [12, Remark 9.5]). The rate of convergence $1/\sqrt{V_d(W)}$ for the distance to $N_{\Sigma(W)}$ is comparable to $1/\sqrt{n}$ in the classical central limit theorem for sums of $n$ i.i.d. random vectors and is thus presumably optimal.
We prepare the proof of Theorem 4.2 with two lemmas. In the sequel, we use the Wills functional $V(K)$ of $K \in \mathcal{K}^d$ (see, e.g., [17, Chapter 22]). We write the difference operator $D$ with respect to the pair $(x, K)$, with $x \in \mathbb{R}^d$, $K \in \mathcal{K}^d$.

Lemma 4.3. There exists a constant $C \in (0, \infty)$ only depending on $d$, $\gamma$, and $\mathbb{Q}$ such that, for $x, x_1, x_2 \in \mathbb{R}^d$, $K, K_1, K_2 \in \mathcal{K}^d$, $i, j \in \{0, \dots, d\}$, and $m, m_1, m_2 \in \{1, \dots, 6\}$, the stated moment bounds hold.

Proof. For $m \in \{2, 3\}$ or $i = j$ and $m_1 = m_2 = 2$ this is shown in [17]; the remaining cases can be treated similarly.

Proof of Theorem 4.2. We deduce Theorem 4.2 from Theorem 1.1 and Theorem 1.2 by bounding $\gamma_1, \dots, \gamma_6$ from Subsection 1.2 as follows. We denote by $\tilde\gamma_1, \dots, \tilde\gamma_6$ the corresponding terms without the normalization $1/\sqrt{V_d(W)}$ of the functionals. Without loss of generality we can assume that $\gamma = 1$. In the sequel let $(Z_n)_{n \in \mathbb{N}}$ be independent copies of the typical grain $Z_0$. It follows from Lemma 4.3, the monotonicity and the translation invariance of the Wills functional (i.e., $V(K) \le V(L)$ for $K, L \in \mathcal{K}^d$ with $K \subseteq L$ and $V(K + x) = V(K)$ for $K \in \mathcal{K}^d$ and $x \in \mathbb{R}^d$), and Lemma 4.4 that the displayed bounds hold. Hence, we see that $\gamma_1$ and $\gamma_2$ are at most of the order $\sqrt{V(W)}/V_d(W)$. From the same arguments as above we obtain a corresponding bound for each $k \in \mathbb{N}$, so that together with (4.2) we deduce that $\gamma_4$ is at most of the order $\sqrt{V(W)}/V_d(W)$.
Combining the previous estimates with Lemma 4.3 yields corresponding bounds for $\tilde\gamma_5$ and $\tilde\gamma_6$. Monotonicity and translation invariance of the Wills functional and Lemma 4.4 imply the displayed estimates. Thus, $\gamma_5$ and $\gamma_6$ are at most of the orders $V(W)^{1/3}/V_d(W)^{5/6}$ and $V(W)^{1/4}/V_d(W)^{3/4}$, respectively. By [12, Lemma 3.7], there exists a dimension dependent constant $C_d \in (0, \infty)$ such that $V(W)/V_d(W) \le C_d$ for all $W \in \mathcal{K}^d$ with $r(W) \ge 1$.

Multivariate normal approximation for functionals of marked Poisson processes
In this subsection we establish a consequence of Theorem 1.1 and Theorem 1.2, which can be seen as a multivariate version of Proposition 1.4 and Theorem 6.1 in [16]. This result will be used heavily in the companion paper [32] in order to deduce rates of normal approximation for Poisson functionals which may be expressed as sums of stabilizing score functions. We work in the context of marked Poisson processes, where $(M, \mathcal{F}_M, \lambda_M)$ denotes the probability space of marks. Let $\hat{X} := X \times M$, let $\hat{\mathcal{F}}$ be the product $\sigma$-field of $\mathcal{F}$ and $\mathcal{F}_M$, and let $\hat\lambda$ be the product measure of $\lambda$ and $\lambda_M$; here $(X, \mathcal{F}, \lambda)$ is as before. For a given point $x \in X$ we denote by $M_x$ the corresponding random mark, which has distribution $\lambda_M$ and is independent of everything else.
The following lemma (see [16, Proposition 2.3 and Corollary 2.4]) provides a criterion for $g$ belonging to $\operatorname{dom} \delta$ and an upper bound for the second moment of $\delta(g)$.

Lemma A.4. Let $g$ be a random function depending only on $\eta$ and satisfying (A.2) and $\mathbb{E} \int_{X^2} (D_y g(x))^2 \, \lambda^2(d(x, y)) < \infty$. Then $g \in \operatorname{dom} \delta$ and
\[ \mathbb{E}\, \delta(g)^2 \le \mathbb{E} \int_X g(x)^2 \, \lambda(dx) + \mathbb{E} \int_{X^2} (D_y g(x))^2 \, \lambda^2(d(x, y)). \]