On Stein's Method for Multivariate Self-Decomposable Laws

This work explores and develops elements of Stein's method of approximation, in the infinitely divisible setting, and its connections to functional analysis. It is mainly concerned with multivariate self-decomposable laws without finite first moment and, in particular, with $\alpha$-stable ones, $\alpha \in (0,1]$. At first, several characterizations of these laws via covariance identities are presented. In turn, these characterizations lead to integro-differential equations which are solved with the help of both semigroup and Fourier methodologies. Then, Poincar\'e-type inequalities for self-decomposable laws having finite first moment are revisited. In this non-local setting, several algebraic quantities (such as the carr\'e du champs and its iterates) originating in the theory of Markov diffusion operators are computed. Finally, rigidity and stability results for the Poincar\'e-ratio functional of the rotationally invariant $\alpha$-stable laws, $\alpha\in (1,2)$, are obtained; and as such they recover the classical Gaussian setting as $\alpha \to 2$.


Introduction
The present notes form a sequel to the works [1,2] where Stein's method for general univariate and multivariate infinitely divisible laws with finite first moment has been initiated. Introduced in [53,54], Stein's method is a collection of techniques allowing one to control the discrepancy, in a suitable metric, between probability measures and to provide quantitative rates of convergence in weak limit theorems. Originally developed for the Gaussian and the Poisson laws (see [16]), the method has since been extended and generalized in several independent directions beyond the classical univariate Gaussian and Poisson settings. In this regard, let us cite [7,10,37,43,27,35,38,25,56] and [8,31,29,48,45,15,47,39,40,50,30,2] for univariate and multivariate extensions and generalizations, respectively. Moreover, for good introductions to the method, let us refer the reader to the standard references and surveys [24,9,51,18,14]. In all the works just cited, the target probability distribution admits, at the very least, a finite first moment. The very recent [20], where the case of the univariate α-stable distributions, α ∈ (0, 1], is studied, seems to be the only instance of the method bypassing the finite first moment assumption. Below, we develop a Stein's method framework for non-degenerate multivariate self-decomposable distributions without Gaussian component. Let us recall that self-decomposable distributions form a subclass of infinitely divisible distributions. Moreover, they arise as weak limits of normalized sums of independent summands and, as such, they naturally generalize the Gaussian and the stable distributions. Originally introduced by Paul Lévy in [34], self-decomposable distributions and their properties have been studied in depth by many authors (see, e.g., [36,52]).
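In concrete terms, a law with characteristic function ϕ is self-decomposable when, for every c ∈ (0, 1), ϕ(ξ) = ϕ(cξ) ϕ_c(ξ) for some characteristic function ϕ_c. A minimal numerical sketch of this factorization, with the standard univariate Cauchy law chosen purely for illustration (the function names are ours):

```python
import cmath

def phi_cauchy(xi):
    """Characteristic function of the standard univariate Cauchy law."""
    return cmath.exp(-abs(xi))

def phi_residual(xi, c):
    """Residual factor phi_c: itself the characteristic function of a
    Cauchy law with scale 1 - c, so the factorization is into laws."""
    return cmath.exp(-(1.0 - c) * abs(xi))

# Self-decomposability: phi(xi) = phi(c * xi) * phi_c(xi) for all c in (0, 1).
for c in (0.1, 0.5, 0.9):
    for xi in (-2.0, -0.3, 0.7, 5.0):
        assert abs(phi_cauchy(xi) - phi_cauchy(c * xi) * phi_residual(xi, c)) < 1e-12
```

The same factorization with c-dependent residuals is exactly what fails for a generic infinitely divisible law, which is why self-decomposability is a strictly smaller class.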
The methodology developed here for multivariate self-decomposable distributions relies on a specific semigroup of operators already put forward in our previous analyses [1,2]. The generator of this semigroup is an integro-differential operator whose non-local part depends in a subtle way on the Lévy measure of the target self-decomposable distribution (see Lemma 4.1). Indeed, the non-local part of this operator differs from the one obtained in [1,2] since the Fourier symbols of the associated semigroup of operators do not exhibit C^1-smoothness. However, by exploiting the polar decomposition of the Lévy measure of the target self-decomposable distribution together with the monotonicity of the associated k-function, C^1-regularity is reached, and natural candidates for the corresponding Stein equation and its solution are thus put forward. Moreover, this equation reflects the Lévy-Khintchine representation used to express the characteristic function of the target self-decomposable distribution. This naturally induces three types of equations reminiscent of the classical distinction between stable laws: α ∈ (0, 1), α = 1 and α ∈ (1, 2).
With these new findings, we revisit Poincaré-type inequalities for self-decomposable distributions with finite first moment. Initially obtained in [17] (see also [32]), these Poincaré-type inequalities reflect the infinite divisibility of the reference measure (without Gaussian component) and as such put into play a non-local Dirichlet form contrasting with the standard local Dirichlet form associated with the Gaussian measures. Our new proof of these Poincaré-type inequalities is based on the semigroup of operators already put forward (and used to solve the Stein equation) in [2] and is in line with the proof of the Gaussian Poincaré inequality based on the differentiation of the variance along the Ornstein-Uhlenbeck semigroup (see, e.g., [6]). Moreover, in this non-local setting, we compute several algebraic quantities (such as the carré du champs and its iterates) originating in the theory of Markov diffusion operators in order to reach rigidity and stability results for the Poincaré-ratio (U-)functional defined in (70) and associated with the rotationally invariant α-stable distributions. Rigidity results for infinitely divisible distributions with finite second moment were obtained in [19, Theorem 2.1] whereas the corresponding stability results were obtained in [2, Theorem 4.5] through Stein's method and variational techniques inspired by [22]. Here, for the rotationally invariant α-stable distribution, α ∈ (1, 2), we revisit the method of [22] using the framework of Dirichlet forms. Coupled with a truncation procedure, rigidity and stability of the Poincaré U-functional are stated in Corollary 5.1, Corollary 5.2 and Theorem 5.3. This truncation procedure allows us to build an optimizing sequence for the U-functional. This sequence of functions can be spectrally interpreted as a singular sequence verifying a Weyl-type condition associated with the corresponding Poincaré constant (see Conditions (71) and (86) below).
Let us further describe the content of our notes: the next section introduces the notations and definitions used throughout this work and proves a characterization theorem for multivariate infinitely divisible distributions with finite first moment. In Section 3, using the previous characterization and truncation arguments, we obtain several characterization theorems for self-decomposable laws lacking a finite first moment. These results highlight the role of the Lévy-Khintchine representation of the characteristic function of the target self-decomposable distribution and apply, in particular, to multivariate stable laws with stability index in (0, 1]. In Section 4, a Stein equation for non-degenerate multivariate self-decomposable distributions without finite first moment is at first put forward. It is then solved, under a low moment condition, via a combination of semigroup techniques and Fourier analysis. In the last section, Poincaré-type inequalities for self-decomposable laws with finite first moment are looked at anew. Several algebraic quantities originating in the theory of Markov diffusion operators are computed in this non-local setting. In particular, for the rotationally invariant α-stable laws with α ∈ (1, 2), a Bakry-Émery criterion is shown to hold, recovering, as α → 2, the classical Gaussian theory involving the carré du champs and its iterates. Finally, rigidity and stability results for the Poincaré U-functional of the rotationally invariant α-stable distributions, α ∈ (1, 2), are obtained using elements of spectral analysis and Dirichlet form theory. A technical appendix finishes our manuscript.

Notations and Preliminaries
Throughout, let ‖·‖ and ⟨·; ·⟩ be respectively the Euclidean norm and inner product on R d , d ≥ 1. Let also S(R d ) be the Schwartz space of infinitely differentiable rapidly decreasing real-valued functions defined on R d , and finally let F be the Fourier transform operator given, for f ∈ S(R d ), by $\mathcal{F}(f)(\xi) = \int_{\mathbb{R}^d} f(x) e^{-i\langle \xi; x\rangle}\, dx$. On S(R d ), the Fourier transform is an isomorphism and the following inversion formula is well known: $f(x) = (2\pi)^{-d} \int_{\mathbb{R}^d} \mathcal{F}(f)(\xi) e^{i\langle \xi; x\rangle}\, d\xi$. Next, C b (R d ) is the space of bounded continuous functions on R d endowed with the uniform norm. For µ a probability measure on R d and for 1 ≤ p < +∞, L p (µ) is the Banach space of equivalence classes of functions defined µ-a.e. on R d such that $\|f\|_{L^p(\mu)}^p = \int_{\mathbb{R}^d} |f(x)|^p \mu(dx) < +\infty$. Similarly, L ∞ (µ) is the space of equivalence classes of µ-measurable functions which are bounded µ-essentially everywhere. For any bounded linear operator, T , from a Banach space (X , ‖·‖ X ) to another Banach space (Y, ‖·‖ Y ), the operator norm is, as usual, $\|T\|_{op} := \sup\{ \|T(f)\|_{\mathcal{Y}} : \|f\|_{\mathcal{X}} \le 1 \}$. More generally, for any r-multilinear form F from (R d ) r , r ≥ 1, to R, the operator norm of F is $\|F\|_{op} := \sup\{ |F(v_1, \dots, v_r)| : v_j \in \mathbb{R}^d,\ \|v_j\| = 1,\ j = 1, \dots, r \}$.
Through the whole text, a Lévy measure is a positive Borel measure on R d such that ν({0}) = 0 and $\int_{\mathbb{R}^d} (1 \wedge \|u\|^2)\, \nu(du) < +\infty$. An R d -valued random vector X is infinitely divisible with triplet (b, Σ, ν) (written X ∼ ID(b, Σ, ν)) if its characteristic function ϕ writes, for all ξ ∈ R d , as
$$\varphi(\xi) = \exp\Big( i\langle b; \xi\rangle - \frac{1}{2}\langle \xi; \Sigma \xi\rangle + \int_{\mathbb{R}^d} \big( e^{i\langle u; \xi\rangle} - 1 - i\langle u; \xi\rangle \mathbb{1}_D(u) \big)\, \nu(du) \Big), \qquad (3)$$
with b ∈ R d , Σ a symmetric positive semi-definite d × d matrix, ν a Lévy measure on R d and D the closed Euclidean unit ball of R d . The representation (3) is mainly the one to be used, from start to finish, with the (unique) generating triplet (b, Σ, ν). However, other types of representations are also possible and two of them are presented next. First, if ν is such that $\int_{\|u\| \le 1} \|u\|\, \nu(du) < +\infty$, then (3) becomes
$$\varphi(\xi) = \exp\Big( i\langle b_0; \xi\rangle - \frac{1}{2}\langle \xi; \Sigma \xi\rangle + \int_{\mathbb{R}^d} \big( e^{i\langle u; \xi\rangle} - 1 \big)\, \nu(du) \Big),$$
where $b_0 = b - \int_{\|u\| \le 1} u\, \nu(du)$ is called the drift of X. This representation is cryptically expressed as X ∼ ID(b 0 , Σ, ν) 0 . Second, if ν is such that $\int_{\|u\| > 1} \|u\|\, \nu(du) < +\infty$, then (3) becomes
$$\varphi(\xi) = \exp\Big( i\langle b_1; \xi\rangle - \frac{1}{2}\langle \xi; \Sigma \xi\rangle + \int_{\mathbb{R}^d} \big( e^{i\langle u; \xi\rangle} - 1 - i\langle u; \xi\rangle \big)\, \nu(du) \Big),$$
where $b_1 = b + \int_{\|u\| > 1} u\, \nu(du)$ is called the center of X. In turn, this last representation is cryptically written as X ∼ ID(b 1 , Σ, ν) 1 . In fact, b 1 = EX since, for any p > 0, $\mathbb{E}\|X\|^p < +\infty$ is equivalent to $\int_{\|u\| > 1} \|u\|^p\, \nu(du) < +\infty$. Also, for any r > 0, $\mathbb{E} e^{r\|X\|} < +\infty$ is equivalent to $\int_{\|u\| > 1} e^{r\|u\|}\, \nu(du) < +\infty$.
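As a concrete check on the representation (3), the standard univariate Cauchy law has Σ = 0 and Lévy measure ν(du) = du/(πu²); by symmetry the compensator term integrates to zero, and the exponent reduces to ∫ (cos(uξ) − 1)/(πu²) du = −|ξ|, matching ϕ(ξ) = e^{−|ξ|}. A rough numerical verification (the truncation R and step h are arbitrary numerical choices of ours):

```python
import math

def cauchy_exponent(xi, R=1000.0, h=0.001):
    """Riemann-sum approximation of int_{-R}^{R} (cos(u*xi) - 1)/(pi*u^2) du.

    The integrand extends continuously to -xi^2/(2*pi) at u = 0, so the
    grid simply starts at u = h; symmetry doubles the half-line sum."""
    total = 0.0
    for k in range(1, int(R / h) + 1):
        u = k * h
        total += (math.cos(u * xi) - 1.0) / (math.pi * u * u)
    return 2.0 * h * total

# Expect approximately -|xi|, the Lévy-Khintchine exponent of the Cauchy law.
assert abs(cauchy_exponent(1.0) + 1.0) < 0.01
```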
In the sequel, we are also interested in some distinct classes of infinitely divisible distributions, namely the stable ones and the self-decomposable ones. Recall that an ID random vector X is α-stable, 0 < α < 2, if b ∈ R d , if Σ = 0 and if its Lévy measure ν admits the polar decomposition
$$\nu(B) = \int_{S^{d-1}} \int_0^{+\infty} \mathbb{1}_B(rx)\, \frac{dr}{r^{\alpha+1}}\, \sigma(dx),$$
where σ is a finite positive measure on S d−1 , the Euclidean unit sphere of R d . When α ∈ (0, 1), then $\int_{\|u\| \le 1} |u_j|\, \nu(du) < +\infty$, for all 1 ≤ j ≤ d, and so ϕ admits the drift representation with, again, $b_0 = b - \int_{\|u\| \le 1} u\, \nu(du)$. Now, recall that an ID random vector X is self-decomposable (SD) if b ∈ R d , if Σ = 0, and if its Lévy measure ν admits the polar decomposition
$$\nu(B) = \int_{S^{d-1}} \int_0^{+\infty} \mathbb{1}_B(rx)\, k_x(r)\, \frac{dr}{r}\, \sigma(dx),$$
where σ is a positive finite measure on S d−1 and where k x (r) is a function which is nonnegative, decreasing in r (k x (r 1 ) ≤ k x (r 2 ), for 0 < r 2 ≤ r 1 ) and measurable in x. In the sequel, without loss of generality, k x (r) is assumed to be right-continuous in r ∈ (0, +∞), to admit a left-limit at each r ∈ (0, +∞), and $\int_0^{+\infty} (1 \wedge r^2) k_x(r)\, dr/r$ is assumed independent of x. Next (see, e.g., [44, Chapter 12]), let us denote by $V_a^b(g)$ the variation of a function g over the interval [a, b] ⊂ (0, +∞), where the supremum is taken over all subdivisions of [a, b]. Since k x (r) is of bounded variation in r on any (a, b) ⊂ (0, +∞), a > 0, b > 0 and a ≤ b, and right-continuous in r ∈ (0, +∞), the following integration by parts formula holds true for all f continuously differentiable on (a, b). Let us now introduce some natural distances between probability measures on R d . Let N d be the space of multi-indices of dimension d.
For any α ∈ N d , $|\alpha| = \sum_{i=1}^d \alpha_i$ and D^α denotes the partial derivative operator defined on smooth enough functions f by $D^\alpha(f)(x_1, \dots, x_d) = \partial^{|\alpha|} f / \partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}$. Moreover, for any r-times continuously differentiable function h on R d , viewing its ℓth derivative D ℓ (h) as an ℓ-multilinear form, for 1 ≤ ℓ ≤ r, let $M_\ell(h) := \sup_{x \in \mathbb{R}^d} \|D^\ell(h)(x)\|_{op}$. For r ≥ 0, H r is the space of bounded continuous functions defined on R d which are continuously differentiable up to (and including) the order r and such that, for any such function f , $\max_{0 \le \ell \le r} M_\ell(f) \le 1$, with M 0 (f ) := sup x∈R d |f (x)|. Then, the smooth Wasserstein distance of order r between two random vectors X and Y having respective laws µ X and µ Y is defined by $d_{W_r}(X, Y) := \sup_{h \in H_r} |\mathbb{E}\, h(X) - \mathbb{E}\, h(Y)|$. Moreover, for r ≥ 1, d Wr admits a representation over C ∞ c (R d ), the space of infinitely differentiable compactly supported functions on R d (see [2, Lemma A.2.]). As usual, for two probability measures, µ 1 and µ 2 , on R d , µ 1 is said to be absolutely continuous with respect to µ 2 , denoted by µ 1 << µ 2 , if for any Borel set, B, such that µ 2 (B) = 0, it follows that µ 1 (B) = 0.
To end this section, let us state the following characterization of ID random vectors with finite first moment, valid, for example, for stable random vectors with stability index α ∈ (1, 2); it has its origin in the univariate result [1, Theorem 3.1].
Theorem 2.1. Let X be a random vector such that E|X i | < +∞, for all i ∈ {1, . . . , d}. Let ν be a Lévy measure on R d such that $\int_{\|u\| \ge 1} \|u\|\, \nu(du) < +\infty$. Then, for every bounded Lipschitz function f on R d , if and only if X is an ID random vector with Lévy measure ν (and $b = \mathbb{E}X - \int_{\|u\| > 1} u\, \nu(du)$).
Proof. Let us assume that X is an ID random vector with finite first moment and with Lévy measure ν. Then, from [32, Proposition 2], for all f and g bounded Lipschitz functions on R d , where (X z , Y z ) is an ID random vector in R 2d defined through an interpolation scheme as in [32, Equation (2.7)]. Now, since X has finite first moment, one can take for g the function g t (x) = ⟨t; x⟩, for all x ∈ R d and for some t ∈ R d . Then, by linearity, since X z = d X, where = d stands for equality in distribution. This concludes the direct part of the proof. Conversely, let us assume that Then, for all ξ ∈ R d and all t ∈ R, where the equality is understood to be in R d . In particular, one has Denoting by Φ t the function defined by Φ t (ξ) = E e it⟨X;ξ⟩ , for all ξ ∈ R d , the previous equality boils down to Moreover, one notes that Φ 0 (ξ) = 1. Then, for all ξ ∈ R d and all t ∈ R, Taking t = 1, the characteristic function of X is then given, for all ξ ∈ R d , by namely, X is ID with Lévy measure ν (and $b = \mathbb{E}X - \int_{\|u\| > 1} u\, \nu(du)$). Whenever Σ ε is non-singular for every ε ∈ (0, 1], the following two conditions are equivalent: (a) As ε → 0 + , $\bar{X}_\varepsilon = \Sigma_\varepsilon^{-1/2} X_\varepsilon$ converges in distribution to a centered multivariate Gaussian random vector with identity covariance matrix.
for c α,d given by Clearly, as α → 2 − , X α converges in distribution to a centered Gaussian random vector Z with identity covariance matrix. Next, by Theorem 2.1, for all f bounded Lipschitz function on R d , and observe, at first, that for all f ∈ S(R d ), , and observe now that the Fourier symbol, σ α , of this operator satisfies, for all ξ ∈ R d , Finally, for all f ∈ S(R d ) so that the characterizing identity (21) is preserved when passing to the limit, converging, again, for all f ∈ S(R d ), to

Characterizations of Self-Decomposable Laws
In this section, we provide various characterization results, for stable distributions and some self-decomposable ones, not covered by Theorem 2.1. However, the direct parts of these results are simple consequences of Theorem 2.1 together with truncation and discretization arguments. The stable results recover, in particular, the one-dimensional results independently obtained in [20]. Below, and throughout, we will make use of the transformation T c applied to positive (Lévy) measures and defined for all c > 0 and all Borel sets, B, of R d by

Theorem 3.1. Let X be a random vector in R d . Let b ∈ R d , α ∈ (0, 1) and let ν be a Lévy measure such that, for all c > 0, Then, for all f ∈ S(R d ), if and only if X is a stable random vector with parameter b, stability index α and Lévy measure ν.
Proof. Let us first assume that X is a stable random vector in R d with parameters b ∈ R d , stability index α ∈ (0, 1) and Lévy measure ν. Then, by [52, Theorem 14.3, (ii)], ν is given by where σ is a finite positive measure on the Euclidean unit sphere of R d , and let X R be the ID random vector defined through its characteristic function by Note, in particular, that X R is such that E‖X R ‖ < +∞. Then, by Theorem 2.1, for all g ∈ S(R d ), Now, choosing g = ∂ i (f ) for some f ∈ S(R d ) and for i ∈ {1, . . . , d}, it follows that To continue, project the vectorial equality (23) onto the direction e i = (0, . . . , 0, 1, 0, . . . , 0) to get, for all i ∈ {1, . . . , d}, where X R,i and b 0,i are the i-th coordinates of X R and of b 0 respectively. Adding up these last identities, for i ∈ {1, . . . , d}, leads to Now, observe that X R converges in distribution towards X since, by the Lebesgue dominated convergence theorem, Moreover, from the polar decomposition of the Lévy measure ν R , Next, for all z ∈ R d , set H z (r) = ∫ S d−1 f (z + rx) σ(dx), for all r > 0. Moreover, for all r > 0, Thus, A standard integration by parts argument, combined with α ∈ (0, 1), implies that Next, integrating with respect to the law of X R , one gets that Again, since α ∈ (0, 1), f ∈ S(R d ) and σ(S d−1 ) < +∞, Finally, to conclude the direct implication, one needs to prove that To this end, for all R > 1 and all z ∈ R d , Since α ∈ (0, 1) and f ∈ S(R d ), it is clear that both functions are well-defined, bounded and continuous on R d . Moreover, for all R > 1 and all z ∈ R d , Thus, F R converges uniformly on R d towards F . Finally, since X R converges in distribution to X, which concludes the first part of the proof.
To prove the converse implication, let us assume that, for all f ∈ S(R d ), Denoting by ϕ X the characteristic function of X, the equality (25) can be rewritten as Using standard Fourier arguments and the fact that f ∈ S(R d ), where the left-hand side has to be understood as a duality bracket between the Schwartz function F(f ) and the tempered distribution ⟨ξ; ∇(ϕ X )⟩. Since ϕ X is continuous on R d , for all ξ ∈ R d with ξ ≠ 0, Moreover, ϕ X (0) = 1. Now, in order to solve the previous linear partial differential equation of order one, let us change the coordinate system (ξ 1 , . . . , ξ d ) into the hyper-spherical one (r, θ 1 , . . . , θ d−1 ), where r > 0, θ i ∈ [0, π], for all i ∈ {1, . . . , d − 2}, and θ d−1 ∈ [0, 2π). Noting that and using the scaling property of the Lévy measure ν, i.e., (22), one gets For any fixed x ∈ S d−1 , this linear differential equation admits a unique solution, which is given by since ϕ X (0) = 1. Then, X is a stable random vector in R d with parameter b, stability index α and Lévy measure ν.
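The scaling property used in the converse part, namely ν(cB) = c^{−α} ν(B) for all c > 0 (which condition (22) encodes through T c ), can be checked in closed form on the radial part r^{−1−α} dr of the stable polar decomposition; the value α = 0.6 below is an arbitrary choice of ours:

```python
alpha = 0.6  # any stability index in (0, 1) would do here

def nu_radial(a, b):
    """Mass that the alpha-stable radial part r^(-1-alpha) dr assigns to (a, b]."""
    return (a ** -alpha - b ** -alpha) / alpha

# Scaling: nu(c * (a, b]) = c^(-alpha) * nu((a, b]) for every c > 0.
for c in (0.5, 2.0, 7.0):
    for a, b in ((0.1, 1.0), (1.0, 4.0)):
        assert abs(nu_radial(c * a, c * b) - c ** -alpha * nu_radial(a, b)) < 1e-12
```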
The ensuing result deals with the Cauchy case.
Theorem 3.2. Let X be a random vector in R d . Let b ∈ R d and let ν be a Lévy measure on R d such that, for all c > 0, Moreover, let σ, the spherical part of ν, be such that Then, for all f ∈ S(R d ), if and only if X is a stable random vector in R d with parameter b, stability index α = 1 and Lévy measure ν.
Proof. The proof is similar to the one of Theorem 3.1. The direct part goes through a double truncation procedure together with an integration by parts and, then, a passage to the limit. Let us first assume that X is stable with parameter b, stability index α = 1, Lévy measure ν and spherical part σ. Then, by [52, Theorem 14.3, (ii)], Let R > 1 be a truncation parameter, let and let X R be the ID random vector defined through its characteristic function by Note, in particular, that X R is such that E‖X R ‖ < +∞. Then, by Theorem 2.1, for all g ∈ S(R d ), Performing computations similar to those in the proof of Theorem 3.1, for all f ∈ S(R d ), Now, since X R converges in distribution towards X, as R tends to +∞, Next, let us study the second term on the right-hand side of (27). First, since R > 1, From the polar decomposition of the Lévy measure ν R , Then, for all z ∈ R d , Setting H z (r) = ∫ S d−1 f (z + rx) σ(dx), for all r > 0 and all z ∈ R d , it follows that Thus, A standard integration by parts argument implies that Integrating with respect to the law of X R , one gets Then, since f ∈ S(R d ) and σ(S d−1 ) < +∞, Let us now study the convergence, as R → +∞, of To this end, let F R and F be the bounded and continuous functions on R d respectively defined by and by Now, note that, for all z ∈ R d and all R > 1, where Then, by standard inequalities, since f ∈ S(R d ) and σ(S d−1 ) < +∞, which implies that F R converges uniformly to F , as R → +∞. Thus, and also Combining (27), (28) and (29), one obtains which is the direct part of the theorem. To prove the converse, assume that, for all f ∈ S(R d ), Denoting by ϕ X the characteristic function of X, the identity (30) can then be rewritten as Reasoning as in the proof of Theorem 3.1 gives, for all r > 0 and all x ∈ S d−1 , To conclude, note that the previous equality can be interpreted as an ordinary differential equation in the radial variable.
Its solution is given, for all r ≥ 0 and all x ∈ S d−1 , by where G and J are defined, for all R > 0 and all x ∈ S d−1 , by Straightforward computations, and the fact that $\int_{S^{d-1}} x\, \sigma(dx) = 0$, finally imply that which concludes the proof.
Remark 3.1. The quantity $\int_{S^{d-1}} x\, \sigma(dx)$ reflects the asymmetry of the Lévy measure ν. In case $\int_{S^{d-1}} x\, \sigma(dx) = 0$, a careful inspection of the proof of Theorem 3.2 reveals that the identity (26) becomes, for all f ∈ S(R d ),

The next results provide extensions of both Theorem 3.1 and Theorem 3.2 to subclasses of self-decomposable distributions with regular radial part on (0, +∞) and specific asymptotic behaviors at the edges of (0, +∞) in all directions of S d−1 .
Theorem 3.3. Let X be a random vector in R d . Let b ∈ R d and let ν be a Lévy measure on R d with polar decomposition where σ is a finite positive measure on S d−1 and where k x (r) is a nonnegative continuous function decreasing in r ∈ (0, +∞), continuous in x ∈ S d−1 and such that Let ν̃ be the positive measure on R d defined by Then, for all f ∈ S(R d ), if and only if X is self-decomposable with parameter b, Σ = 0 and Lévy measure ν.
Proof. Let us start with the direct part. Let X be a SD random vector of R d with parameter b and Lévy measure ν such that $\int_{\|u\| \le 1} \|u\|\, \nu(du) < +\infty$ and whose polar decomposition is given by (31). Let R > 1 and let (σ n ) n≥1 be a sequence of positive linear combinations of Dirac measures which converges weakly to σ, the spherical component of ν. Then, for all R > 1 and all n ≥ 1, let and denote by X R,n the SD random vector with parameter b and Lévy measure ν R,n . Similarly, let, for all n ≥ 1, and denote by X n the SD random vector with parameter b and Lévy measure ν n . Performing computations similar to those in the proof of Theorem 3.1, for all f ∈ S(R d ), all R > 1 and all n ≥ 1, Now, since, as R → +∞, X R,n converges in distribution to X n , for all n ≥ 1, Moreover, from the polar decomposition of the Lévy measure ν R,n , mutatis mutandis, where, for all R > 1 and all n ≥ 1, Then, since lim Next, one needs to prove that where ν̃ n is given, for all R > 1 and all n ≥ 1, by To this end, for all R > 1, all n ≥ 1 and all z ∈ R d , set By (33) and since f ∈ S(R d ), it is clear that both functions are well-defined, bounded and continuous on R d . Moreover, Thus, as R tends to +∞, F R,n converges to F n uniformly on R d , for all n ≥ 1. Finally, since X R,n converges in distribution to X n , for all n ≥ 1, Then, for all n ≥ 1, Now, observe that (X n ) n≥1 converges in distribution to X since (σ n ) n≥1 converges weakly to σ. To conclude the proof of the direct part of the theorem, let us study the convergence of: Now, since (X n ) n≥1 converges in distribution to X, lim n→+∞ ϕ n (ξ) = ϕ(ξ), for all ξ ∈ R d . In turn, let us prove the following: Observe that, for all ξ ∈ R d and all n ≥ 1, Since (σ n ) n≥1 converges weakly to σ, let us prove that the function H(x, ξ) = The second term on the right-hand side of (35) converges to 0 as n tends to +∞, by the Lebesgue dominated convergence theorem, since $\int_0^{+\infty} (1 \wedge r)\, dk_x(r) < +\infty$.
For the first term of (35), observe that For the second term on the right-hand side of (36), for all n ≥ 1, so that by (32), this term converges to 0. Finally, integrating by parts, for all n ≥ 1, (1)).
Now, the second term on the right-hand side of (37) converges to 0, as n tends to +∞, and, by the Lebesgue dominated convergence theorem, the first term also converges to 0, as n tends to +∞. This proves that lim Now, reasoning as in the second part of the proof of Theorem 3.1, Let us develop the second term inside the above parenthesis a bit further. First, The radial equation (39) then becomes, for all r > 0 and all x ∈ S d−1 , For any fixed x ∈ S d−1 , this linear differential equation admits a unique solution, which is given by since ϕ X (0) = 1. Then, X is a SD random vector with parameter b and Lévy measure ν.
The next result is the SD pendant of the Cauchy characterization obtained in Theorem 3.2.
Theorem 3.4. Let X be a random vector in R d . Let b ∈ R d and let ν be a Lévy measure on R d with polar decomposition where σ is a finite positive measure on S d−1 and where k x (r) is a nonnegative continuous function decreasing in r ∈ (0, +∞), continuous in x ∈ S d−1 , and such that Let ν̃ be the positive measure on R d defined by Then, for all f ∈ S(R d ), if and only if X is self-decomposable with parameter b, Σ = 0 and Lévy measure ν.
Proof. The proof is a direct extension of the proof of Theorem 3.2, so it is only outlined, with the main differences highlighted. Let us start with the direct part. Let X be a SD random vector with parameter b and Lévy measure ν. Let R > 1 and let (σ n ) n≥1 be a sequence of positive linear combinations of Dirac measures converging weakly to σ, the spherical component of ν. Then, for all R > 1 and all n ≥ 1, let and denote by X R,n the SD random vector with parameter b and Lévy measure ν R,n . Similarly, for all n ≥ 1, let and denote by X n the SD random vector with parameter b and Lévy measure ν n . As in the proof of Theorem 3.2, for all f ∈ S(R d ) and all R > 1, Now, since, as R → +∞, X R,n converges in distribution to X n , for all n ≥ 1, Moreover, for all R > 1 and all n ≥ 1, From the limiting behavior of k x at +∞ and at 0 + , for all n ≥ 1, Next, consider the term defined, for all z ∈ R d and all n ≥ 1, by By a standard integration by parts, for all n ≥ 1, Then, observe that, for all x ∈ S d−1 , lim (1), and, for all n ≥ 1, Finally, for all n ≥ 1, Now, since (σ n ) n≥1 converges weakly to σ, (X n ) n≥1 converges in distribution to X. Hence, To conclude the direct part of the proof, let us consider the following terms: First, for all n ≥ 1, Since (X n ) n≥1 converges in distribution to X, as n tends to +∞, (ϕ n (ξ)) n≥1 converges to ϕ(ξ), for all ξ ∈ R d . Moreover, Then, by the Lebesgue dominated convergence theorem, Similarly, for all n ≥ 1, and proceeding as in the proof of Theorem 3.3, The direct part of the theorem is proved. For the converse part, mutatis mutandis, based on (42), for all r > 0 and all x ∈ S d−1 , where G̃ and J̃ are respectively defined, for all R > 0 and all x ∈ S d−1 , by Finally, straightforward computations together with Fubini's Theorem and the fact that lim

Remark 3.2. (i) Let us recast the previous results in dimension one, i.e., for d = 1.
In this case, the Lévy measure of a SD law is absolutely continuous with respect to the Lebesgue measure and is given by where k is a nonnegative function increasing on (−∞, 0) and decreasing on (0, +∞). Now, assume, for simplicity only, that k is continuously differentiable on (−∞, 0) and on (0, +∞) and satisfies (43). Theorem 3.4 then gives the following characterizing identity when X is a SD random variable with parameter b ∈ R and Lévy measure ν: In a similar fashion, it is possible to provide a characterization result for SD random variables with Lévy measure ν such that $\int_{|u| \le 1} |u|\, \nu(du) < +\infty$ and such that k is continuously differentiable on (−∞, 0) and on (0, +∞) with (44), via Theorem 3.3.
(ii) From [52, Theorem 28.4], under the assumption that the function k is continuously differentiable on (−∞, 0) and on (0, +∞) and satisfies (43), the associated SD distribution admits a Lebesgue density infinitely differentiable on R. If the function k is continuously differentiable on (−∞, 0) and on (0, +∞) and satisfies (44), then c can be either finite or infinite, implying different types of regularity for the Lebesgue density of the associated SD distribution.
(iii) Let X be a SD random vector with Lévy measure ν as in Theorem 3.4 and such that $\int_{\|u\| \ge 1} \|u\|\, \nu(du) < +\infty$. Then, integrating by parts, for all f ∈ S(R d ),

Let us now present a simple example to which Theorem 3.3 and Theorem 3.4 apply and which is not covered in the relevant existing literature: rotationally invariant self-decomposable distributions. Indeed, let λ be the uniform measure on S d−1 and let ν have polar decomposition with spherical component λ, with $\int_{\|u\| \le 1} \|u\|\, \nu(du) < +\infty$ and with k satisfying the assumptions of Theorem 3.3. Then, the corresponding self-decomposable distribution is rotationally invariant.
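To see why a uniform spherical component yields rotational invariance, note that in the α-stable case (taken here only as the simplest instance) the real part of the Lévy exponent is, up to a constant, −∫ S d−1 |⟨x; ξ⟩|^α λ(dx), which is unchanged when ξ is rotated if λ is uniform. A numerical check in d = 2 (the grid size and the value of α are arbitrary choices of ours):

```python
import math

def angular_energy(theta, alpha=1.5, n=4000):
    """Approximate int_{S^1} |<x; xi>|^alpha lambda(dx) for xi = (cos theta, sin theta),
    with lambda the uniform probability measure on the unit circle."""
    return sum(abs(math.cos(2.0 * math.pi * k / n - theta)) ** alpha
               for k in range(n)) / n

# Rotational invariance: the value does not depend on the direction of xi.
base = angular_energy(0.0)
for theta in (0.3, 1.0, 2.2):
    assert abs(angular_energy(theta) - base) < 1e-6
```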

The Stein Equation for Self-Decomposable Laws
Throughout this subsection, X is a non-degenerate self-decomposable random vector in R d , without Gaussian component, with law µ X , characteristic function ϕ given by (3) with parameter b ∈ R d and Lévy measure ν given by where k x (r) is a nonnegative function decreasing in r ∈ (0, +∞) and where σ is a finite positive measure on S d−1 . The following assumptions are in force throughout this subsection: These assumptions ensure that the positive measure ν̃ given by is a well-defined Lévy measure on R d . Let us introduce next a collection of ID random vectors, X t , t ≥ 0, defined through their characteristic functions, for all t ≥ 0 and all ξ ∈ R d , by By a change of variables, this function is, for all ξ ∈ R d and all t ≥ 0, equal to which is a well-defined characteristic function since X is SD. Denoting by µ t the law of X t , let us introduce the following continuous family of operators (P ν t ) t≥0 . For t = 0, set µ 0 = δ 0 , with δ 0 the Dirac measure at 0, so that P ν 0 is the identity operator. Based on the computations of [2, Lemma 3.1], observe that the continuous family of operators (P ν t ) t≥0 forms a semigroup. The next lemma identifies the generator of (P ν t ) t≥0 on S(R d ).
Lemma 4.1. Let (P ν t ) t≥0 be the semigroup of operators defined by (49) and let ν̃ be the Lévy measure on R d given by (47). The generator of (P ν t ) t≥0 is given, for all f ∈ S(R d ) and all x ∈ R d , by Proof. Let f ∈ S(R d ). By Fourier inversion, for all x ∈ R d and all t ∈ (0, 1), Then, a direct application of the Lebesgue dominated convergence theorem together with Fourier duality implies that which concludes the proof of the lemma.
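For orientation, in the univariate Cauchy case (a special case used only as an illustration), (48)–(49) would read P ν t (h)(x) = E h(e^{−t} x + X t ) with X t Cauchy of scale 1 − e^{−t}; at the level of characteristic functions, the semigroup property P ν s ∘ P ν t = P ν s+t then reduces to a one-line scale identity (the helper name `step` is ours):

```python
import math

def step(scale, t):
    """Effect of one semigroup step on the Cauchy scale parameter:
    the current scale shrinks by e^{-t} and a fresh scale 1 - e^{-t} is added."""
    return math.exp(-t) * scale + (1.0 - math.exp(-t))

# Semigroup property: composing a t-step and an s-step equals an (s+t)-step.
for s, t in ((0.2, 0.7), (1.3, 0.1), (0.5, 2.0)):
    assert abs(step(step(0.0, t), s) - step(0.0, s + t)) < 1e-12
```

As t → +∞ the scale tends to 1, i.e., the semigroup is ergodic with the target Cauchy law as invariant measure, which is the mechanism exploited below to solve the Stein equation.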
Based on the previous lemma, it is natural to consider the following Stein equation for self-decomposable distributions with polar decomposition given by (45) (under appropriate assumptions on the function k x (r)): By semigroup theory, a candidate solution to (50) is given by The next result proves the existence of the function f h given by (51), studies its regularity and shows that this function is a strong solution of (50) on R d .
Theorem 4.1. Let X be a non-degenerate SD random vector without Gaussian component, with law µ X , characteristic function ϕ and Lévy measure ν having polar decomposition given by (45), where the function k x (r) is continuous in r ∈ (0, +∞), continuous in x ∈ S d−1 and satisfies (46). Assume that there exists ε ∈ (0, 1) such that E‖X‖ ε < +∞ and that there exist β 1 > 0, β 2 > 0 and β 3 ∈ (0, 1) such that the function k x (r) in (45) satisfies and that, Let X t , t ≥ 0, be the random vector defined through the characteristic function ϕ t given by (48) and assume that, Let (P ν t ) t≥0 be the semigroup of operators defined by (49). Then, for any h ∈ H 1 , the function f h given, for all x ∈ R d , by is well defined and continuously differentiable on R d , where ν̃ is given by (47) and where $\tilde{b} = b - \int_{S^{d-1}} k_y(1)\, y\, \sigma(dy)$.
Proof. To start with, let us prove that, for any h ∈ H 1 , the function f h defined by (51) does exist.
For all x ∈ R d and all t > 0, where we have used Proposition A.1 in the last line. Then, the function f h is well defined on R d .
Moreover, reasoning as in [2, Proposition 3.4], one gets that and with M_2(f_h) ≤ 1/2. To conclude, let us prove that f_h is a strong solution of (50) on R^d. Set u(t, x) = P^ν_t(h)(x), for t ≥ 0 and x ∈ R^d. First, let us prove that, for all t ≥ 0 and all x ∈ R^d, Since h ∈ C^∞_c(R^d), by Fourier inversion, for all t ≥ 0 and all x ∈ R^d, Moreover, for all x ∈ R^d, all ξ ∈ R^d and all t ≥ 0, Then, for all t ≥ 0 and all x ∈ R^d, where the Fourier symbol of A and the Fourier representation of u(t, x) have been used in the last equality. To pursue, let 0 < T < +∞ and let us integrate equation (53) between 0 and T. Then, for all x ∈ R^d, Letting T → +∞, the ergodicity of the semigroup (P^ν_t)_{t≥0} gives: Next, let us prove that ∫_0^{+∞} |A(P^ν_t(h))(x)| dt < +∞, for all x ∈ R^d. To do so, one needs to estimate ∇(P^ν_t(h))(x). From the commutation relation and the fact that h ∈ H_2,

Now, let us bound (I).
For all x ∈ R^d and all t > 0, Let us start with the second term on the right-hand side of (54). Again, via the commutation relation and an integration by parts, Note also that ∫_0^{+∞} k_y(e^t) dt = ∫_1^{+∞} k_y(r) dr/r < +∞, for y ∈ S^{d−1}. This concludes the bounding of the second term on the right-hand side of (54). For the first term, for all x ∈ R^d and all t ≥ 0, where, Then, by commutation and a change of variables, … ⟨…; ry⟩ (−dk_y(e^t r)) σ(dy).
Next, let us discuss the condition (52) in the particular case α ∈ (0, 1). (A similar discussion can be performed in the case α = 1 but requires different estimates.) Since α ∈ (0, 1), the random vector X_t, t ≥ 0, defined through (48) has the characteristic function given, for all ξ ∈ R^d and all t ≥ 0, by with ν as in (6). Then, X_t is equal in distribution to (1 − e^{−αt})^{1/α} X̃, where X̃ is α-stable with b_0 = 0 and α ∈ (0, 1). It is then straightforward to check that E‖X_t‖^ε is uniformly bounded in t for any ε ∈ (0, α).
(ii) Now, let X be a non-degenerate SD random vector as in Theorem 4.1 such that Let f_h be the solution to the Stein equation (50) defined by (51), for h ∈ H_2 ∩ C^∞_c(R^d). Then, by an integration by parts, for all so that, in this case, f_h is a solution to the following Stein equation In particular, if X is α-stable with α ∈ (0, 1), then the equation (58) boils down to
(iii) Next, let X be a non-degenerate SD random vector as in Theorem 4.1 and such that Let f_h be the solution to the Stein equation (50) defined by (51), for h ∈ H_2 ∩ C^∞_c(R^d). Then, integrating by parts twice, for all so that f_h is a solution to the following Stein equation
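The distributional identity for X_t used in the discussion of (52) can be checked on characteristic functions. The sketch below takes the rotationally invariant α-stable case in dimension one (an assumption made for concreteness):

```python
import numpy as np

alpha = 0.7  # alpha in (0, 1)

def phi_t(xi, t):
    # Characteristic function of X_t from (48), rotationally invariant case.
    return np.exp(-(1 - np.exp(-alpha * t)) * abs(xi) ** alpha)

def phi_scaled(xi, t):
    # Characteristic function of (1 - e^{-alpha t})^{1/alpha} * X~,
    # X~ standard symmetric alpha-stable.
    c = (1 - np.exp(-alpha * t)) ** (1 / alpha)
    return np.exp(-abs(c * xi) ** alpha)

# The two coincide, so X_t = (1 - e^{-alpha t})^{1/alpha} X~ in distribution.
# Hence E||X_t||^eps = c(t)^eps E||X~||^eps is uniformly bounded in t,
# since c(t) = (1 - e^{-alpha t})^{1/alpha} <= 1 for all t >= 0.
t_grid = np.linspace(0.01, 20.0, 50)
c = (1 - np.exp(-alpha * t_grid)) ** (1 / alpha)
max_gap = max(abs(phi_t(1.7, t) - phi_scaled(1.7, t)) for t in t_grid)
print(max_gap, c.max())
```
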

Applications to Functional Inequalities for SD Random Vectors
This section discusses Poincaré-type inequalities for self-decomposable random vectors, providing in particular new proofs based on the semigroup of operators (P^ν_t)_{t≥0} defined in (49). These proofs are in line with the standard proof of the Gaussian Poincaré inequality based on the differentiation of the variance along the Ornstein-Uhlenbeck semigroup. In the literature, standard references regarding Poincaré-type inequalities for infinitely divisible random vectors are [17,32]. In [17], the proof is based on stochastic calculus for Lévy processes and the Lévy-Itô decomposition, whereas in [32], the proof is based on a covariance representation for infinitely divisible random vectors. Let us also mention that Poincaré-type inequalities for stable random vectors have been obtained in [49,55].
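The Gaussian model inequality that the semigroup argument mimics can be verified numerically. The sketch below checks Var f(Z) ≤ E[f′(Z)²] for Z ∼ N(0,1) and the illustrative choice f = sin (an assumption for this example) via Gauss-Hermite quadrature:

```python
import numpy as np

# Gauss-Hermite check of the Gaussian Poincare inequality
# Var f(Z) <= E[f'(Z)^2], Z ~ N(0,1).
nodes, weights = np.polynomial.hermite_e.hermegauss(80)
weights = weights / np.sqrt(2 * np.pi)  # normalize to the standard Gaussian

def E(g):
    """Expectation under N(0,1) by probabilists' Hermite quadrature."""
    return float(np.sum(weights * g(nodes)))

f, fprime = np.sin, np.cos
var_f = E(lambda x: f(x) ** 2) - E(f) ** 2   # = (1 - e^{-2})/2
energy = E(lambda x: fprime(x) ** 2)          # = (1 + e^{-2})/2
print(var_f, energy)
```

Here the gap between the two sides is 2e^{−2}, consistent with equality holding only for linear f, the first Hermite eigenfunction.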
Proposition 5.1. Let X be a centered SD random vector with Lévy measure ν such that where σ is a finite positive measure on S^{d−1} and where k_x(r) is a nonnegative continuous function decreasing in r ∈ (0, +∞), continuous in x ∈ S^{d−1}, with lim_{r→+∞} k_x(r) = 0, lim Then, for all f ∈ C^∞_c(R^d) with Ef(X) = 0,
Proof. Let X be a SD random vector with characteristic function ϕ and Lévy measure ν satisfying the hypotheses of the proposition. Let (P^ν_t)_{t≥0} be the semigroup of operators given by (49). In particular, on C^∞_c(R^d), for all t ≥ 0 and all x ∈ R^d, This operator admits the Fourier representation, i.e., for all x ∈ R^d, Next, let f ∈ C^∞_c(R^d) be such that Ef(X) = 0. Then, for all t ≥ 0, where A is defined, for all f ∈ C^∞_c(R^d) and all x ∈ R^d, by Thus, for all t ≥ 0, Next, from Theorem 2.1, observe that, for all f ∈ C^∞_c(R^d) and all t ≥ 0, and so, for all t ≥ 0, Next, using Fourier arguments as in the proof of [33, Proposition 4.1], for all f ∈ C^∞_c(R^d) and all x ∈ R^d, Moreover, an integration by parts in the radial coordinate gives, for all ξ ∈ R^d, and thus, for all x ∈ R^d, Then, for all t ≥ 0, But, from a change of variables, Jensen's inequality and invariance, With an integration by parts, observe that, for x ∈ S^{d−1}, Finally, integrating with respect to t (between 0 and +∞) leads to
Remark 5.1. (i) Let X be a rotationally invariant α-stable random vector, α ∈ (1, 2), with characteristic function ϕ given by Then, by Proposition 5.1, for all f ∈ S(R^d) with Ef(X) = 0, where c_{α,d} = −α(α − 1)Γ((α + d)/2) / (4 cos(απ/2) Γ((α + 1)/2) π^{(d−1)/2} Γ(2 − α)).
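The constant c_{α,d} of Remark 5.1 (i) is positive for α ∈ (1, 2): since cos(απ/2) < 0 on that range, the leading minus sign makes the ratio positive. A quick numerical sketch (with the constant written exactly as in the remark):

```python
import numpy as np
from scipy.special import gamma

def c_alpha_d(alpha, d):
    # Constant from Remark 5.1 (i); cos(alpha*pi/2) < 0 for alpha in (1, 2),
    # so the leading minus sign makes c_{alpha,d} positive.
    num = -alpha * (alpha - 1) * gamma((alpha + d) / 2)
    den = (4 * np.cos(alpha * np.pi / 2) * gamma((alpha + 1) / 2)
           * np.pi ** ((d - 1) / 2) * gamma(2 - alpha))
    return num / den

for a in (1.1, 1.5, 1.9):
    print(a, c_alpha_d(a, 3))
```

Note also that c_{α,d} → 0 both as α → 1⁺ (through the factor α − 1) and as α → 2⁻ (through Γ(2 − α) → +∞ in the denominator).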
(ii) Throughout the proof of Proposition 5.1, the following integration by parts formula has been obtained and used, for all f ∈ C^∞_c(R^d), where µ_X is the law of X and Γ is a bilinear symmetric mapping defined, for all f, g ∈ C^∞_c(R^d) and all x ∈ R^d, by with σ_ν(ξ, ζ) = ∫_{R^d} (e^{i⟨u;ξ⟩} − 1)(e^{i⟨u;ζ⟩} − 1) ν(du), for ξ, ζ ∈ R^d. A straightforward computation in the Fourier domain shows that this bilinear symmetric mapping is the "carré du champs" operator associated with the generator A of the semigroup (P^ν_t)_{t≥0} (see, e.g., [6] for a thorough exposition of these topics in the setting of Markov diffusion operators). Namely, for all f, g ∈ C^∞_c(R^d) and all x ∈ R^d, Standard objects of interest in the setting of Markov diffusion operators are the iterated "carré du champs" operators of any order n ≥ 1, defined through the following recursive formula, for all f, g ∈ C^∞_c(R^d), with the convention that Γ_0(f, g)(x) = f(x)g(x), for f, g ∈ C^∞_c(R^d) and x ∈ R^d. The forthcoming simple lemma provides a representation of Γ_2 as a pseudo-differential operator whose symbol is completely explicit.
Lemma 5.1. Let ν be a Lévy measure with polar decomposition given by (45), where σ is a finite positive measure on S^{d−1} and where k_x(r) is a nonnegative continuous function decreasing in r ∈ (0, +∞), continuous in x ∈ S^{d−1}, with Let A be the operator defined, for all f ∈ S(R^d) and all x ∈ R^d, by Then, for all f, g ∈ S(R^d) and all x ∈ R^d, where σ_ν(ξ, ζ) and ρ_ν(ξ, ζ) are given, for all ξ, ζ ∈ R^d, by σ_ν(ξ, ζ) =
Proof. First, by definition, for all f, g ∈ S(R^d) and all x ∈ R^d, Let us compute Γ_1(A(f), g)(x). Using the Fourier representation, for all x ∈ R^d, so that, for all ξ ∈ R^d, Thus, for all x ∈ R^d, Similarly, for all x ∈ R^d, At first, observe that, Next, by straightforward computations, Then, for all x ∈ R^d, where, for all ξ, ζ ∈ R^d, ρ_ν(ξ, ζ) = This concludes the proof of the lemma.
The next proposition asserts that the Bakry-Émery criterion still holds for the rotationally invariant α-stable distribution with α ∈ (1, 2).
Proposition 5.2. Let α ∈ (1, 2) and let X_α be a rotationally invariant α-stable random vector of R^d with law µ_α and with Lévy measure given by (19), where λ is the uniform measure on S^{d−1} and where the normalizing constant c_{α,d} is given by (20).
where Γ and Γ 2 are respectively the "carré du champs" operator and the iterated "carré du champs" operator of order 2 associated with ν α .
Proof. By Remark 5.2, observe that, for all ξ, ζ ∈ R^d, where, Then, by Lemma 5.1 and Fourier inversion, for all f ∈ C^∞_c(R^d) and all x ∈ R^d, Thus, for all f ∈ C^∞_c(R^d) and all x ∈ R^d,
Let us study rigidity and stability phenomena for the rotationally invariant α-stable distributions with α ∈ (1, 2) based on the Poincaré-type inequality of Proposition 5.1. To reach such results, let us adopt a spectral point of view. This is a natural strategy to obtain sharp forms of geometric and functional inequalities as done, e.g., in [12,23,13]. First, observe that, since α ∈ (1, 2) and since X_α considered in Proposition 5.2 is centered, the function g(x) = x, x ∈ R^d, is an eigenfunction of the semigroup of operators (P^ν_t)_{t≥0} given in (49) with ν = ν_α as in (19). Indeed, for all x ∈ R^d and all t ≥ 0, Then, by its very definition, A(g)(x) = −g(x), for x ∈ R^d, so that g is an eigenfunction of A with associated eigenvalue −1. However, since α ∈ (1, 2), g does not belong to L²(µ_α), with µ_α being the law of X_α. To circumvent this fact, let us build an optimizing sequence by a smooth truncation procedure. For all j ∈ {1, . . . , d} and all R ≥ 1, let g_{R,j} be defined, for all x ∈ R^d, by with ψ ∈ S(R^d), ψ(0) = 1 and 0 ≤ ψ(x) ≤ 1, for x ∈ R^d. Take, for instance, ψ(x) = exp(−‖x‖²), for x ∈ R^d. Now, let us state some straightforward facts about the functions g_{R,j}: for all j ∈ {1, . . . , d}, E g_{R,j}(X_α) = 0 and, as R → +∞, Next, by studying precisely the rate at which both the last two terms diverge, we intend to prove that, for all j ∈ {1, . . . , d}, The first technical lemma investigates the rate at which E[g_{R,j}(X_α)²] diverges as R tends to +∞.
Let g_{R,j}(x) = x_j ψ(x/R), for x ∈ R^d. Let α ∈ (1, 2) and X_α be a rotationally invariant α-stable random vector of R^d with characteristic function ϕ given, for all ξ ∈ R^d, by Then, for all j ∈ {1, . . . , d}, as R tends to +∞, where
Proof. First, for all R ≥ 1, set ψ_R(x) = ψ(x/R), for x ∈ R^d, and, for all j ∈ {1, . . . , d}, where X_α is a rotationally invariant α-stable random vector with α ∈ (1, 2) and X_{α,j} is its j-th coordinate. By Fubini's theorem, standard Fourier analysis, two integrations by parts and a change of variables, it follows that where ∂²_{ξ_j} is the partial derivative of order 2 in the ξ_j coordinate. Moreover, since α ∈ (1, 2) and since ψ² ∈ S(R^d), all the following integrals converge Hence, as R −→ +∞, which concludes the proof of the lemma.
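The basic facts about the truncation g_{R,j} invoked above can be illustrated directly (here with j = 1 and the suggested choice ψ(x) = exp(−‖x‖²)):

```python
import numpy as np

# Smooth truncation from the text: g_{R,j}(x) = x_j * psi(x/R) with
# psi(x) = exp(-||x||^2); j = 1 in this sketch.
def g(x, R):
    return x[0] * np.exp(-np.dot(x, x) / R ** 2)

x = np.array([0.8, -1.3, 2.1])
R = 10.0
# g_{R,1} is odd, so E g_{R,1}(X_alpha) = 0 under the symmetric law mu_alpha;
# it is bounded (hence in L^2(mu_alpha), unlike x_1 itself for alpha < 2);
# and it converges pointwise to x_1 as R -> +infinity.
print(g(x, R) + g(-x, R), abs(g(x, 1e8) - x[0]))
```

Boundedness follows since |u e^{−u²/R²}| ≤ R e^{−1/2}/√2 for u ∈ R; the lemmas then quantify how the relevant expectations grow as R → +∞.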
This second technical lemma provides the rate of divergence, as R tends to +∞, of for all j ∈ {1, . . . , d}. Let α ∈ (1, 2) and X_α be a rotationally invariant α-stable random vector of R^d with characteristic function ϕ given, for all ξ ∈ R^d, by Then, for all j ∈ {1, . . . , d}, as R tends to +∞, where g_{R,j}(x) = x_j ψ(x/R), for x ∈ R^d, and where Γ is the "carré du champs" operator associated with X_α.
From the above lemma, and from a spectral point of view, the correct functional to observe a rigidity phenomenon for the rotationally invariant α-stable distribution, α ∈ (1, 2), is the functional defined, for all µ ∈ M_1(R^d) (M_1(R^d) is the set of probability measures on R^d), by where X ∼ µ and where H_α is the set of functions f from R^d to R such that Var(f(X)) < +∞ and 0 < E ∫_{R^d} |f(X + u) − f(X)|² ν_α(du) < +∞. Therefore, the next result is a direct consequence of the Poincaré-type inequality for the rotationally invariant α-stable distribution, α ∈ (1, 2), and of the existence of an optimizing sequence as built above.
To continue, let us state and prove a converse to the above corollary. In particular, note that, for all j ∈ {1, . . . , d}, Indeed, this is a direct consequence of (67) and (69) since the divergent terms cancel out and the remaining terms converge to 0 as R → +∞.
To end this section, let us investigate stability results for rotationally invariant α-stable laws. A natural strategy to reach stability put forward in [22,2] is to use Stein kernels. This strategy relies on the Lax-Milgram theorem to ensure the existence of Stein kernels under appropriate assumptions. More precisely, the Stein kernel is seen as the solution to a variational problem linked to the covariance identity characterizing the target probability measure. In the sequel, we develop an approach based on Dirichlet forms to obtain the existence of Stein kernels. Adopting the notations, the definitions and the terminology of [26, Chapter 1], let us start with an abstract result which then leads to the existence of Stein kernels in known and in new situations. Note that this result as well as its geometric generalizations and consequences will be further analyzed in the ongoing work [3].
Theorem 5.1. Let H be a real Hilbert space with inner product ·; · H and induced norm · H . Let E be a closed symmetric non-negative definite bilinear form in the sense of [26] with dense linear domain D(E). Let {G α : α > 0} and {P t : t > 0} be, respectively, the strongly continuous resolvent and the strongly continuous semigroup on H associated with E. Moreover, assume that, there exists a closed linear subspace H 0 ⊂ H such that, for all t > 0 and all u ∈ H 0 , for some C P > 0 independent of u and of t. Let G 0 + be the operator defined by where the above integral is to be understood in the Bochner sense. Then, for all u ∈ H 0 , G 0 + (u) belongs to D(E) and, for all v ∈ D(E), Moreover, for all u ∈ H 0 , Proof. First, from [26, Theorem 1.3.1], there is a one to one correspondence between the family of closed symmetric forms on H and the family of non-positive definite self-adjoint operators on H. Then, let A, {G α : α > 0} and {P t : t > 0} be, respectively, the generator, the strongly continuous resolvent and the strongly continuous semigroup on H associated with E such that, for all α > 0 and all u ∈ H, (Again the above integral is to be understood in the Bochner sense.) Then, from [26, Lemma 1.3.3], for all α > 0, all u ∈ H and all v ∈ D (E), Then, in order to establish (74) from (76), one needs to pass to the limit in (76) as α −→ 0 + . First, since (72) holds, G 0 + given by (73) is well defined on H 0 . Moreover, for all α > 0 and all u ∈ H 0 , Then, G α (u) converges strongly in H to G 0 + (u), as α tends to 0 + . It therefore follows that, for all u ∈ H 0 and all v ∈ H, Next, let us prove that, for all u ∈ H 0 , First, note that, for all α, β > 0, Then, from (76), and similarly for E(G β (u), G β (u)), as β tends to 0 + . 
Now, for the crossed term, The closedness of E then ensures that G 0 + (u) belongs to D(E) and that This gives (74), while the inequality (75) follows from (74), the Cauchy-Schwarz inequality, the triangle inequality and (72), concluding the proof of the theorem.
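A finite-dimensional sketch makes the mechanism of Theorem 5.1 concrete. Assume (an illustrative assumption, not the theorem's generality) H = R^n, E(u, v) = ⟨−Au, v⟩ for a symmetric negative definite matrix A, P_t = e^{tA}, and H_0 = H; then G_{0+} = ∫_0^∞ P_t dt = (−A)^{−1}, and (74) reduces to E(G_{0+}(u), v) = ⟨u, v⟩:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = -(B @ B.T)                      # symmetric, (a.s.) negative definite
evals, V = np.linalg.eigh(A)        # spectral decomposition A = V diag(evals) V^T

def G0(u):
    # G_{0+} u = int_0^infty e^{tA} u dt, computed spectrally:
    # int_0^infty e^{t*lam} dt = -1/lam for lam < 0.
    return V @ ((V.T @ u) / (-evals))

u = rng.standard_normal(n)
v = rng.standard_normal(n)
lhs = (-A @ G0(u)) @ v              # E(G_{0+}(u), v)
rhs = u @ v                         # <u, v>_H
print(lhs, rhs)
```

The Poincaré-type bound (72) corresponds here to the spectral gap min(−evals) > 0, which also yields the norm bound (75): ‖G_{0+}(u)‖ ≤ ‖u‖ / min(−evals).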
The next remark explores how the abstract Theorem 5.1 recovers various known results and provides new ones.
Remark 5.3. (i) First, let γ be the centered Gaussian probability measure on R d with the identity matrix as its covariance matrix. Let H be the space of R d -valued square-integrable functions on R d with respect to γ, let H 0 be the functions in H with mean 0 with respect to γ and let E be the symmetric non-negative definite bilinear form defined, for all f, g ∈ C ∞ c (R d , R d ), by where ·; · HS is the Hilbert-Schmidt product for real matrices of size d × d. It is a standard fact of Gaussian analysis that the above form is closable and its closed extension gives rise to the Ornstein-Uhlenbeck generator and its semigroup. Moreover, note that the function, h(x) = x, x ∈ R d , belongs to H 0 and that γ satisfies the following Poincaré inequality: for all smooth f : Then, by Theorem 5.1, for all f ∈ D(E) where G 0 + (h) is given, for all x ∈ R d , by where div is the standard divergence operator. Thus, (78) is the integration by parts formula associated with γ.
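The Gaussian integration by parts formula (78) can be verified numerically in dimension one, where it reads E[X f(X)] = E[f′(X)] for X ∼ N(0,1) (equivalently, the Gaussian Stein kernel is the identity). A quadrature check with the illustrative choice f(x) = x³:

```python
import numpy as np

# One-dimensional check of the Gaussian integration by parts formula:
# E[X f(X)] = E[f'(X)] for X ~ N(0,1).
nodes, weights = np.polynomial.hermite_e.hermegauss(60)
weights = weights / np.sqrt(2 * np.pi)  # normalize to the standard Gaussian

def E(g):
    return float(np.sum(weights * g(nodes)))

f = lambda x: x ** 3
fp = lambda x: 3 * x ** 2
print(E(lambda x: x * f(x)), E(fp))  # both sides equal E[X^4] = 3
```
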
(ii) Let µ be a centered probability measure on R^d with finite second moment such that, for all smooth f : for some C_P > 0 independent of f. Moreover, assume that the bilinear symmetric non-negative definite form is closable (sufficient conditions for the closability of the above form have been addressed in [26, Chapter 3.1] and in [11, Chapter 2.6]). Note that the function h defined by h(x) = x, x ∈ R^d, belongs to H, the space of square-integrable functions on R^d with respect to µ, and that ∫_{R^d} h(x)µ(dx) = 0. Then, by Theorem 5.1, for all f ∈ D(E), so that a Gaussian Stein kernel of µ exists (in the sense of [22, Definition 2.1]) and is given by Moreover, with X ∼ µ, (75) reads Thus, one retrieves the results of [22].
(iii) Let α ∈ (1, 2) and let µ_α be a rotationally invariant α-stable probability measure on R^d with Lévy measure defined by where c_{α,d} is given by (20). Let H be the space of square-integrable functions on R^d with respect to µ_α. Let E be the symmetric non-negative definite bilinear form defined, for all f, g ∈ C^∞_c(R^d), by Since ν_α * µ_α is absolutely continuous with respect to µ_α, it is standard to check that the above form is closable and its smallest closed extension gives rise (see [26, Theorem 1.3.1]) to a non-positive definite self-adjoint operator A on H with corresponding symmetric contractive semigroup (P_t)_{t>0} on H. Moreover, from Theorem 5.1, for all smooth f : where H_0 is the space of square-integrable functions on R^d with respect to µ_α having mean zero. Then, by Theorem 5.1, for all f ∈ D(E) and all h ∈ H_0, Next, observe that the function h(x) = x, for x ∈ R^d, does not belong to L²(µ_α).
The next technical lemma describes the link between the semigroup of operators obtained from the form E given by (79) and the semigroup of operators (P ν t ) t≥0 given by (49) with ν = ν α as in (19) and with α ∈ (1, 2). With the help of this lemma, it is then possible to obtain the spectral properties of this semigroup of symmetric operators based on those of (P ν t ) t≥0 .
Lemma 5.4. Let α ∈ (1, 2), let ν_α be the Lévy measure given by (19) and let µ_α be the corresponding rotationally invariant α-stable probability measure on R^d. Let E be the smallest closed extension of the symmetric non-negative definite bilinear form given by (79). Let (P_t)_{t>0} be the strongly continuous semigroup of symmetric contractions on L²(µ_α) associated with E. Let (P^{ν_α}_t)_{t≥0} be the semigroup of operators defined by (49) and let ((P^{ν_α}_t)*)_{t≥0} be its dual semigroup in L²(µ_α). Then, for all f ∈ L²(µ_α) and all t > 0, Moreover, for all x ∈ R^d and all t > 0,
Proof. Since the form E is the smallest closed extension of the bilinear symmetric non-negative definite form given by (79), it is associated with a non-positive definite self-adjoint operator A on L²(µ_α), where D(A) is the domain of the operator A. Let us denote by (P_t)_{t>0} the corresponding strongly continuous semigroup on L²(µ_α), whose existence and uniqueness is ensured by [26, Lemma 1.3.2]. Now, recall that the semigroup of operators (P^{ν_α}_t)_{t≥0} extends to every L^p(µ_α), p ≥ 1, as seen using the representation (49) and the bound, Moreover, it is a C_0-semigroup on L^p(µ_α) and its L^p(µ_α)-generator A_{α,p} coincides with A_α on S(R^d), which is now defined, for all f ∈ S(R^d) and all x ∈ R^d, by and for which the following integration by parts formula holds, Then, by polarization, for all f, g ∈ S(R^d), Moreover, since S(R^d) is dense in L²(µ_α), the adjoint of A_{α,2} is uniquely defined so that, for all f ∈ S(R^d) and all g ∈ S(R^d) ∩ D(A*_{α,2}), where D(A*_{α,2}) is the domain of the operator A*_{α,2}. Then, S(R^d) ∩ D(A*_{α,2}) ⊂ D(A) and, for all f ∈ S(R^d) ∩ D(A*_{α,2}) and all x ∈ R^d, Thus, which implies (thanks to [46, Theorem X.51]), for all t > 0 and all f ∈ L²(µ_α), where (P^{ν_α}_{t/(nα)})_{t≥0} is the extension to L²(µ_α) of the semigroup of operators given by (49), after the time change t → t/(nα), while ((P^{ν_α}_{t/(nα)})*)_{t≥0} is its dual semigroup in L²(µ_α) (see, e.g., [42, Chapter 1.10]).
Next, by Fourier duality and since α ∈ (1, 2), for all f ∈ S(R d ) and all j ∈ {1, . . . , d}, This implies that, for all j ∈ {1, . . . , d} and for all t ≥ 0, where g j (x) = x j , for x ∈ R d . This last observation concludes the proof of the lemma.
The following long remark summarizes some basic properties of the semigroups.
(iv) As noticed above, for j ∈ {1, . . . , d}, the functions g_j(x) = x_j, x ∈ R^d, do not belong to L²(µ_α), so that Theorem 5.1 does not directly apply with u = g_j. To circumvent this fact, one can apply a smooth truncation procedure as in (iii). Thus, by Theorem 5.1, for all R ≥ 1, all j ∈ {1, . . . , d} and all f ∈ D(E), and, as R −→ +∞, for all f bounded on R^d. Moreover, from Lemma 5.4, for all j ∈ {1, . . . , d} and all x ∈ R^d, G_{0+}(g_j)(x) = x_j. Then, since µ_α * ν_α << µ_α, as R −→ +∞, for all f bounded and Lipschitz on R^d, Putting together these last two facts into (82) gives, for all f bounded and Lipschitz on R^d, Next, let us state a result ensuring the existence of a Stein kernel with respect to the rotationally invariant α-stable distributions, α ∈ (1, 2), for appropriate probability measures on R^d. Before doing so, recall that a closed, symmetric, bilinear, non-negative definite form on L²(µ) is said to be Markovian if [26, (E.4)] holds. Now, from [26, Theorem 1.4.1], this is equivalent to the fact that the corresponding semigroup P_t is Markovian for all t > 0, namely, if 0 ≤ f ≤ 1, µ-a.e., then 0 ≤ P_t(f) ≤ 1, µ-a.e.
To finish this section, a stability result for probability measures on R^d close to the rotationally invariant α-stable ones, α ∈ (1, 2), is presented. Let α ∈ (1, 2), let ν_α be the Lévy measure given by (19) and let µ_α be the associated rotationally invariant α-stable distribution. Let β ∈ (1, α) and let µ be a centered probability measure on R^d with ∫_{R^d} ‖x‖^β µ(dx) < +∞ and with µ * ν_α << µ. Let E_µ be the closable, Markovian, symmetric, bilinear, non-negative definite form defined, for all f, g ∈ S(R^d), by Moreover, assume that, Then, for some C_{α,d} > 0 only depending on α and on d.
Proof. The proof partly relies on the methodological results contained in [2]. First, as in [2, Proposition 3.4], for all h ∈ H_1 ∩ C^∞_c(R^d), let f_h be defined, for all x ∈ R^d, by with (P^{ν_α}_t)_{t≥0} given in (49) with ν = ν_α and X_α ∼ µ_α. Next, let X ∼ µ. Then, for all h ∈ where g_j(x) = x_j, x ∈ R^d and j ∈ {1, . . . , d}. Let g_{R,j} be the smooth truncation of g_j as defined by (65) with ψ(x) = exp(−‖x‖²), x ∈ R^d. Moreover, ‖g_j − g_{R,j}‖_{L^p(µ)} → 0, as R tends to +∞, for all p ≤ β. Since (see [2, Proposition 3.4]) Now, for all j ∈ {1, . . . , d} and all R ≥ 1, Cutting the integral on u into a small-jumps part and a big-jumps part and using M_1(f_h) ≤ 1 and M_2(f_h) ≤ C_{α,d}, for some C_{α,d} > 0 depending only on α and on d, imply Since ‖g_j − g_{R,j}‖_{L^p(µ)} → 0, as R tends to +∞, and since µ * ν_α << µ, along a subsequence, Now, for all x ∈ R^d, all u ∈ R^d, all R ≥ 1 and all j ∈ {1, . . . , d}, for some constant C_{j,d} > 0 depending only on j and on d. Thus, Lebesgue's dominated convergence theorem implies that, for all j ∈ {1, . . . , d}, along a subsequence, Finally, for all R ≥ 1 and all j ∈ {1, . . . , d}, The first term on the right-hand side of (87) is bounded, for all R ≥ 1 and all j ∈ {1, . . . , d}, by To conclude the proof, let us deal with the second term on the right-hand side of (87). By the Cauchy-Schwarz inequality, for all j ∈ {1, . . . , d} and all R ≥ 1, Now, for all j ∈ {1, . . . , d}, The condition (86) concludes the proof of the theorem.
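The small-jumps/big-jumps splitting used in the proof rests on the two defining integrability properties of a stable Lévy measure: finite second moment near the origin and finite mass away from it. A sketch in a hypothetical one-dimensional normalization ν(du) = |u|^{−1−α} du:

```python
import numpy as np
from scipy.integrate import quad

# Small jumps / big jumps splitting for nu(du) = |u|^{-1-alpha} du on (0, inf).
alpha = 1.5

# Small jumps: int_0^1 u^2 * u^{-1-alpha} du = int_0^1 u^{1-alpha} du.
small, _ = quad(lambda u: u ** (1 - alpha), 0, 1)
# Big jumps: int_1^inf u^{-1-alpha} du.
big, _ = quad(lambda u: u ** (-1 - alpha), 1, np.inf)

print(small, 1 / (2 - alpha))   # closed form: 1/(2 - alpha)
print(big, 1 / alpha)           # closed form: 1/alpha
```

The bound in the proof pairs the small-jump second moment with M_2(f_h) and the big-jump mass with M_1(f_h).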

A Appendix
Lemma A.1. Let ν be a Lévy measure with polar decomposition given by (45), where the function k_x(r) is continuous in r ∈ (0, +∞), continuous in x ∈ S^{d−1} and satisfies (46). Then, for all ξ ∈ R^d, the function t → ψ_t(ξ) is continuously differentiable on [0, +∞) and, for all ξ ∈ R^d and all t ≥ 0, where ν̃ is given by (47).
Proof. First, for all ξ ∈ R^d and all t ≥ 0, Now, observe that, for all ξ ∈ R^d and all t ≥ 0, Then, by Leibniz's integral rule, for all ξ ∈ R^d and all t ≥ 0, Moreover, by Fubini's theorem, for all ξ ∈ R^d and all t ≥ 0, Then, by Leibniz's integral rule, for all ξ ∈ R^d and all t ≥ 0, … ⟨x; ξ⟩ (e^t k_x(e^t) − k_x(1)) σ(dx).
Thus, for all ξ ∈ R^d and all t ≥ 0, … ⟨x; ξ⟩ (e^t k_x(e^t) − k_x(1)) σ(dx). Finally, straightforward computations conclude the proof of the lemma.
Proof. The strategy of the proof is similar to that of [2, Theorem A.1], but without the first moment assumption. The proof of [2, Theorem A.1] is divided into three steps, the last two depending on the finiteness of the first moment. First of all, from Step 1 of the proof of [2, Theorem A.1], for Z and Y two random vectors of R^d and for all r ≥ 1, where H_r is the set of functions which are r-times continuously differentiable on R^d and such that ‖D^α(f)‖_∞ ≤ 1, for all α ∈ N^d with 0 ≤ |α| ≤ r.
Step 2: This last step also follows the lines of Step 3 of the proof of [2, Theorem A.1], so that only the main differences are highlighted. Let h ∈ C^∞_c(R^d) ∩ H_{d+3}. Let Ψ_R be a compactly supported infinitely differentiable function on R^d, with support contained in the closed Euclidean ball centered at the origin and of radius R + 1, with values in [0, 1], and such that Ψ_R(x) = 1, for all x such that ‖x‖ ≤ R. First, for all t > 0, A similar bound holds true for |Eh(X)(1 − Ψ_R(X))| since E‖X‖^ε < +∞. Then, combining (96) together with the previous bounds implies for some C̃_{d,ε} > 0 depending only on d and on ε. Next, as in Step 3 of the proof of [2, Theorem A.1], observe that ‖hΨ_R‖_∞, ‖∂^{d+1}_j(hΨ_R)‖_∞, ‖∂^{d+2}_j(hΨ_R)‖_∞ and ‖∂^{d+3}_j(hΨ_R)‖_∞ are uniformly bounded in R and in h, for R ≥ 1 and h ∈ C^∞_c(R^d) ∩ H_{d+3} (for an appropriate choice of Ψ_R). The last step is an optimization in R, which depends on the behavior of R^{−ε} + C̃_{d,1}‖b‖e^{−t}(R + 1)^d + 2C̃_{d,2}(R + 1)^d γ_1 e^{−β_1 t} + C̃_{d,3}(R + 1)^d γ_2 e^{−β_2 t} + C̃_{d,4}(R + 1)^d γ_3 e^{−β_3 t}, for some C̃_{d,ε} > 0, C̃_{d,1} > 0, C̃_{d,2} > 0, C̃_{d,3} > 0 and C̃_{d,4} > 0. Set β = min(1, β_1, β_2, β_3). Choosing R = e^{βt/(d+1)} and reasoning as in the last lines of [2, Theorem A.1] concludes the proof of the proposition.
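The final optimization can be sketched numerically. Assuming (as reconstructed from the bound above) an error of order R^{−ε} + (R + 1)^d e^{−βt} up to constants, the choice R = e^{βt/(d+1)} makes both competing terms decay at the rate e^{−εβt/(d+1)}:

```python
import numpy as np

# Balancing the truncation error R^{-eps} against the growth (R+1)^d e^{-beta t}
# via R = e^{beta t / (d+1)}; eps, beta, d are illustrative values.
eps, beta, d = 0.5, 0.8, 3
t = np.linspace(1.0, 30.0, 100)
R = np.exp(beta * t / (d + 1))
term1 = R ** (-eps)                          # truncation error
term2 = (R + 1) ** d * np.exp(-beta * t)     # growth times semigroup decay
rate = np.exp(-eps * beta * t / (d + 1))     # resulting overall rate
print(term1[-1], term2[-1], rate[-1])
```

Indeed term1 equals the rate exactly, while term2 = (R + 1)^d e^{−βt} ≤ (2R)^d e^{−βt} = 2^d e^{−βt/(d+1)} ≤ 2^d · rate since ε ≤ 1.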