Multivariate scale-mixed stable distributions and related limit theorems

In the paper, multivariate probability distributions are considered that are representable as scale mixtures of multivariate elliptically contoured stable distributions. It is demonstrated that these distributions form a special subclass of scale mixtures of multivariate elliptically contoured normal distributions. Some properties of these distributions are discussed. Main attention is paid to the representations of the corresponding random vectors as products of independent random variables. In these products, the relations of the distributions of the involved terms with popular probability distributions are traced. As examples of distributions of the class of scale mixtures of multivariate elliptically contoured stable distributions, multivariate generalized Linnik distributions are considered in detail. Their relations with multivariate 'ordinary' Linnik distributions, multivariate normal, stable and Laplace laws, as well as with univariate Mittag-Leffler and generalized Mittag-Leffler distributions, are discussed. Limit theorems are proved presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. The property of scale-mixed multivariate stable distributions to be both scale mixtures of a non-trivial multivariate stable distribution and normal scale mixtures is used to obtain necessary and sufficient conditions for the convergence of the distributions of random sums of random vectors with either infinite or finite covariance matrices to the multivariate generalized Linnik distribution.

Along with general multiplicative properties of the class of scale mixtures of multivariate stable distributions, some important and popular special cases are considered in detail. Multivariate analogs of the Mittag-Leffler distribution are proposed. We study the multivariate (generalized) Linnik and related (generalized) Mittag-Leffler distributions, their interrelation and their relations with multivariate 'ordinary' Linnik distributions, multivariate normal, stable and Laplace laws as well as with univariate 'ordinary' Mittag-Leffler distributions. Namely, we consider mixture representations for the multivariate generalized Mittag-Leffler and multivariate generalized Linnik distributions. We continue the research we started in [2][3][4][5]. In most papers (see, e.g., [6][7][8][9][10][11][12][13][14][15][16][17][18]), the properties of the (multivariate) generalized Mittag-Leffler and Linnik distributions were deduced by analytical methods from the properties of the corresponding probability densities and/or characteristic functions. Instead, here we use the approach which can be regarded as arithmetical in the space of random variables or vectors. Within this approach, instead of the operation of scale mixing in the space of distributions, we consider the operation of multiplication in the space of random vectors/variables provided the multipliers are independent. This approach considerably simplifies the reasoning and makes it possible to notice some general features of the distributions under consideration. We prove mixture representations for general scale mixtures of multivariate stable distributions and their particular cases in terms of normal, Laplace, generalized gamma (including exponential, gamma and Weibull) and stable laws and establish the relationship between the mixing distributions in these representations. 
In particular, we prove that the multivariate generalized Linnik distribution is a multivariate normal scale mixture with the univariate generalized Mittag-Leffler mixing distribution and, moreover, show that this representation can be used as the definition of the multivariate generalized Linnik distribution. Based on these representations, we prove some limit theorems for random sums of independent random vectors with finite covariance matrices. As a particular case, we prove some theorems in which the multivariate generalized Linnik distribution plays the role of the limit law. By doing so, we demonstrate that the scheme of geometric (or, in general, negative binomial) summation is not the only asymptotic setting (even for sums of independent random variables) in which the multivariate generalized Linnik law appears as the limit distribution.
In [2], we showed that along with the traditional and well-known representation of the univariate Linnik distribution as the scale mixture of a strictly stable law with exponential mixing distribution, there exists another representation of the Linnik law as the normal scale mixture with the Mittag-Leffler mixing distribution. The former representation makes it possible to treat the Linnik law as the limit distribution for geometric random sums of independent identically distributed random variables in which summands have infinite variances. The latter normal scale mixture representation opens the way to treating the Linnik distribution as the limit distribution in the central limit theorem for random sums of independent random variables in which summands have finite variances. Moreover, being scale mixtures of normal laws, the Linnik distributions can serve as the one-dimensional distributions of a special subordinated Wiener process. Subordinated Wiener processes with various types of subordinators are often used as models of the evolution of stock prices and financial indexes, see, e.g., [19]. Strange as it may seem, the results concerning the possibility of representing the Linnik distribution as a scale mixture of normals were never explicitly presented in the literature in full detail before [2], although the property of the Linnik distribution to be a normal scale mixture is almost obvious. Perhaps the paper [10] was the closest to this conclusion: it exposed the representability of the Linnik law as a scale mixture of Laplace distributions with the mixing distribution written out explicitly. These results became the basis for our efforts to extend them from the Linnik distribution to the multivariate generalized Linnik law and more general scale mixtures of multivariate stable distributions. Methodically, the present paper is very close to the work of L.
Devroye [20] where many examples of mixture representations of popular probability distributions were discussed from the simulation point of view. The presented material substantially relies on the results of [2,5,15].
In many situations related to experimental data analysis, one often comes across the following phenomenon: although conventional reasoning based on the central limit theorem of probability theory concludes that the expected distribution of observations should be normal, the statistical procedures instead expose noticeable non-normality of real distributions. Moreover, as a rule, the observed non-normal distributions are more leptokurtic than the normal law, having sharper vertices and heavier tails. These situations are typical in financial data analysis (see, e.g., Chapter 4 in [19] or Chapter 8 in [21] and the references therein), in experimental physics (see, e.g., [22]) and other fields dealing with statistical analysis of experimental data. Many attempts were undertaken to explain this heavy-tailedness. The most significant theoretical breakthrough is usually associated with the results of B. Mandelbrot and others [23][24][25] who proposed, instead of the standard central limit theorem, to use reasoning based on limit theorems for sums of random summands with infinite variances (also see [26,27]), resulting in non-normal stable laws as heavy-tailed models of the distributions of experimental data. However, in most cases, the key assumption within this approach, the infiniteness of the variances of elementary summands, can hardly be believed to hold in practice. To overcome this contradiction, in [28], we considered an extended limit setting where it may be assumed that the intensity of the flow of informative events is random, so that the number of jumps up to a certain time in a random-walk-type model, or the sample size, is random. We showed that in this extended setting, actually, heavy-tailed scale mixtures of stable laws can also be limit distributions for sums of a random number of random vectors with finite covariance matrices.
The paper is organized as follows. Section 2 contains basic notation and definitions. Some properties of univariate stable distributions are recalled in Section 3. In Section 4, we introduce multivariate stable distributions and prove a multivariate analog of the univariate 'multiplication theorem' (see Theorem 3.3.1 in [1]). In Section 5 we discuss some properties of scale-mixed multivariate elliptically contoured stable laws. In particular, we prove that these mixtures are identifiable. Section 6 contains the description of the properties of uni- and multivariate generalized Mittag-Leffler distributions. In Section 7, we consider the multivariate generalized Linnik distribution. Here, we discuss different approaches to the definition of this distribution and prove some new mixture representations for the multivariate generalized Linnik distribution. General properties of scale-mixed multivariate stable distributions are discussed in Section 8. In Section 9, we first prove a general transfer theorem presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. As particular cases, conditions are obtained for the convergence of the distributions of scalar normalized random sums of random vectors with finite covariance matrices to scale mixtures of multivariate stable distributions and their special cases: 'pure' multivariate stable distributions and the multivariate generalized Linnik distributions. The results of this section extend and refine those proved in [29].

Basic Notation and Definitions
Let r ∈ N. We will consider random elements taking values in the r-dimensional Euclidean space R^r. The Euclidean norm of a vector x ∈ R^r will be denoted ‖x‖. Assume that all the random variables and random vectors are defined on one and the same probability space (Ω, A, P). The distribution of a random variable Y or an r-variate random vector Y with respect to the measure P will be denoted L(Y) and L(Y), respectively. The weak convergence, the coincidence of distributions and the convergence in probability with respect to a specified probability measure will be denoted by the symbols =⇒, =_d and P−→, respectively. The product of independent random elements will be denoted by the symbol •. The vector with all zero coordinates will be denoted 0.
A univariate random variable with the standard normal distribution function Φ(x) will be denoted X. Let Σ be a positive definite (r × r)-matrix. The normal distribution in R^r with zero vector of expectations and covariance matrix Σ will be denoted N_Σ. This distribution is defined by its density

n(x) = (2π)^{−r/2} |Σ|^{−1/2} exp{−½ xᵀΣ^{−1}x}, x ∈ R^r.

The characteristic function f^{(X)}(t) of a random vector X such that L(X) = N_Σ has the form

f^{(X)}(t) = E exp{i tᵀX} = exp{−½ tᵀΣt}, t ∈ R^r.

A random variable having the gamma distribution with shape parameter r > 0 and scale parameter λ > 0 will be denoted G_{r,λ}; its density is

g(x; r, λ) = λ^r x^{r−1} e^{−λx} / Γ(r), x ≥ 0,

where Γ(r) is Euler's gamma-function, Γ(r) = ∫₀^∞ x^{r−1} e^{−x} dx, r > 0.
In this notation, obviously, G_{1,1} is a random variable with the standard exponential distribution: P(G_{1,1} < x) = (1 − e^{−x}) 1(x ≥ 0) (here and in what follows 1(A) is the indicator function of a set A).
The gamma distribution is a particular representative of the class of generalized gamma distributions (GG distributions), first described in [30] as a special family of lifetime distributions containing both gamma and Weibull distributions. A generalized gamma (GG) distribution is the absolutely continuous distribution defined by the density

g(x; r, α, λ) = |α| λ^r x^{αr−1} e^{−λx^α} / Γ(r), x ≥ 0,

with α ∈ R, λ > 0, r > 0. A random variable with the density g(x; r, α, λ) will be denoted G_{r,α,λ}. It is easy to see that G_{r,α,λ} =_d G_{r,λ}^{1/α}. Let γ > 0. The distribution of the random variable W_γ = G_{1,1}^{1/γ} is called the Weibull distribution with shape parameter γ. It is obvious that W₁ is the random variable with the standard exponential distribution: P(W₁ < x) = (1 − e^{−x}) 1(x ≥ 0). The Weibull distribution is a particular case of GG distributions corresponding to the density g(x; 1, γ, 1). It is easy to see that W_γ =_d W₁^{1/γ}. In the paper [31], it was shown that any gamma distribution with shape parameter no greater than one is mixed exponential. Namely, the density g(x; r, µ) of a gamma distribution with 0 < r < 1 can be represented as

g(x; r, µ) = ∫₀^∞ z e^{−zx} p(z; r, µ) dz, where p(z; r, µ) = [µ^r / (Γ(r) Γ(1 − r))] · 1(z ≥ µ) / [(z − µ)^r z].  (2)

Moreover, a gamma distribution with shape parameter r > 1 cannot be represented as a mixed exponential distribution.
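The power relations above are easy to exercise by simulation. The sketch below (assuming NumPy; the parameter values are arbitrary) samples a generalized gamma variable as G_{r,α,λ} = G_{r,λ}^{1/α} and a Weibull variable as W_γ = W₁^{1/γ}, and checks two consequences: the mean E G_{r,α,λ} = Γ(r + 1/α)/(λ^{1/α} Γ(r)) and the Weibull tail P(W_γ > 1) = e^{−1}.

```python
import math

import numpy as np

rng = np.random.default_rng(12345)
n = 200_000

# Generalized gamma G_{r,alpha,lambda} as an alpha-th root of an ordinary
# gamma variable: G_{r,alpha,lambda} = G_{r,lambda}^(1/alpha).
r, alpha, lam = 2.0, 1.5, 1.0
gg = rng.gamma(shape=r, scale=1.0 / lam, size=n) ** (1.0 / alpha)

# Weibull with shape gamma as a power of a standard exponential: W_gamma = W_1^(1/gamma).
gamma_shape = 1.5
w = rng.exponential(1.0, size=n) ** (1.0 / gamma_shape)

# E G_{r,alpha,lambda} = Gamma(r + 1/alpha) / (lambda^(1/alpha) * Gamma(r))
target_mean = math.gamma(r + 1.0 / alpha) / (lam ** (1.0 / alpha) * math.gamma(r))
print(gg.mean(), target_mean)            # close for large n

# P(W_gamma > 1) = exp(-1^gamma) = e^{-1}, whatever gamma is
print((w > 1.0).mean(), math.exp(-1.0))  # close for large n
```

The same two-line trick (sample a gamma, take a power) is all that is needed later to simulate the generalized Mittag-Leffler and generalized Linnik mixtures.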
In [32] it was proved that if r ∈ (0, 1), µ > 0 and G r, 1 and G 1−r, 1 are independent gamma-distributed random variables, then the density p(z; r, µ) defined by (2) corresponds to the random variable where V 1−r,r is the random variable with the Snedecor-Fisher distribution defined by the probability density In other words, if r ∈ (0, 1), then

Univariate Stable Distributions
Let r ∈ N. Recall that the distribution of an r-variate random vector S is called stable, if for any a, b ∈ R there exist c ∈ R and d ∈ R^r such that aS₁ + bS₂ =_d cS + d, where S₁ and S₂ are independent copies of S. In what follows we will concentrate our attention on a special sub-class of stable distributions called strictly stable. This sub-class is characterized by the property that in the definition given above d = 0.
In the univariate case, the characteristic function f(t) of a strictly stable random variable can be represented in several equivalent forms (see, e.g., [1]). For our further constructions the most convenient form is where Here α ∈ (0, 2] is the characteristic exponent, θ ∈ [−1, 1] is the skewness parameter (for simplicity we consider the "standard" case with unit scale coefficient at t). Any random variable with characteristic function (5) will be denoted S(α, θ) and the characteristic function (5) itself will be written as f α,θ (t). For definiteness, S(1, 1) = 1.
From (7) it is easy to see that S(2, 0) =_d √2·X. Univariate stable distributions are popular examples of heavy-tailed distributions. Their moments of orders δ ≥ α do not exist (the only exception is the normal law corresponding to α = 2), whereas E|S(α, θ)|^δ < ∞ for 0 < δ < α (see, e.g., [33]). Stable laws, and only they, can be limit distributions for sums of a non-random number of independent identically distributed random variables with infinite variance under linear normalization. Let 0 < α ≤ 1. By S(α, 1) we will denote a positive random variable with the one-sided stable distribution corresponding to the characteristic function f_{α,1}(t), t ∈ R. The Laplace-Stieltjes transform ψ^{(S)}_{α,1}(s) of the random variable S(α, 1) has the form

ψ^{(S)}_{α,1}(s) = E exp{−sS(α, 1)} = e^{−s^α}, s ≥ 0.

The moments of orders δ ≥ α of the random variable S(α, 1) are infinite, and for 0 < δ < α we have E S^δ(α, 1) = Γ(1 − δ/α)/Γ(1 − δ) (see, e.g., [33]). For more details see [27] or [1].
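The transform e^{−s^α} can be checked numerically. The sketch below samples S(α, 1) via Kanter's classical representation (a textbook formula, not taken from this paper): S(α, 1) =_d (a(U)/W)^{(1−α)/α} with U uniform on (0, 1), W standard exponential, 0 < α < 1, and compares a Monte Carlo estimate of the Laplace-Stieltjes transform at s = 1 with e^{−1}.

```python
import numpy as np

def one_sided_stable(alpha: float, size: int, rng) -> np.ndarray:
    """Sample S(alpha, 1), 0 < alpha < 1, via Kanter's representation."""
    u = rng.uniform(0.0, 1.0, size)
    w = rng.exponential(1.0, size)
    a = (np.sin((1.0 - alpha) * np.pi * u)
         * np.sin(alpha * np.pi * u) ** (alpha / (1.0 - alpha))
         / np.sin(np.pi * u) ** (1.0 / (1.0 - alpha)))
    return (a / w) ** ((1.0 - alpha) / alpha)

rng = np.random.default_rng(7)
alpha = 0.6
s_sample = one_sided_stable(alpha, 300_000, rng)

# Monte Carlo Laplace-Stieltjes transform at s = 1: should be near exp(-1^alpha) = e^{-1}
lst_at_1 = np.exp(-s_sample).mean()
print(lst_at_1)
```

The same sampler is reused below whenever a one-sided stable factor is needed in a product representation.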

Multivariate Stable Distributions
Now turn to the multivariate case. By Q_r we denote the unit sphere: Q_r = {u ∈ R^r : ‖u‖ = 1}. Let µ be a finite ('spectral') measure on Q_r. It is known that the characteristic function of a strictly stable random vector S has the form (13) with w(·, α) defined in (6), see [34][35][36][37]. An r-variate random vector with the characteristic function (13) will be denoted S(α, µ). We will sometimes use the notation S_{α,µ} for L(S(α, µ)). As is known, a random vector S has a strictly stable distribution with some characteristic exponent α if and only if for any u ∈ R^r the random variable uᵀS (the projection of S onto u) has, up to a scale coefficient γ(u), the univariate strictly stable distribution with the same characteristic exponent α and some skewness parameter θ(u) (relation (14)), see [38]. Moreover, the projection parameter functions γ(u) and θ(u) are related with the spectral measure µ by formulas (15) and (16), see [36][37][38]. Conversely, the spectral measure µ is uniquely determined by the projection parameter functions γ(u) and θ(u); however, there is no simple formula for this correspondence [37]. An r-variate analog of the one-sided univariate strictly stable random variable S(α, 1) is the random vector S(α, µ₊), where 0 < α ≤ 1 and µ₊ is a finite measure concentrated on the set Q_r₊ = {u = (u₁, ..., u_r) ∈ Q_r : u_i ≥ 0, i = 1, ..., r}.
Theorem 1. Let 0 < α ≤ 2, 0 < α′ ≤ 1, let µ be a finite measure on Q_r, and let S(α, µ) be an r-variate random vector having the strictly stable distribution with characteristic exponent α and spectral measure µ. Then relation (17) holds. If, in addition, 0 < α < 1 and µ₊ is a finite measure on Q_r₊, then relation (18) holds as well.

Proof. Let γ(u) and θ(u), u ∈ R^r, be the projection parameter functions corresponding to the measure µ (see (15) and (16)). Then, in accordance with (9) and (14), for any u ∈ R^r the projections of both sides of (17) have one and the same univariate strictly stable distribution. The remark that γ(u) and θ(u) uniquely determine µ concludes the proof of (17). Representation (18) is a particular case of (17).

Remark 1.
Actually, the essence of Theorem 1 is that any multivariate strictly stable distribution with characteristic exponent α < 2 is a scale mixture of a multivariate stable law with a greater characteristic exponent, the mixing distribution being a univariate one-sided strictly stable law. The case α = 2 is not an exception: in this case the mixing distribution is degenerate, concentrated at the unit point. This degenerate law formally satisfies the definition of a stable distribution, being the only stable law that is not absolutely continuous.
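In the univariate case, the multiplication property behind Theorem 1 can be checked numerically: multiplying a symmetric α-stable variable by an independent factor S(α′, 1)^{1/α} must give a symmetric stable variable with exponent αα′, i.e. E cos(tZ) ≈ exp(−|t|^{αα′}). The sketch below uses two classical samplers (the Chambers-Mallows-Stuck formula for the symmetric stable law and Kanter's formula for the one-sided one), neither of which is taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 400_000

def one_sided_stable(alpha, size):
    # Kanter's representation, valid for 0 < alpha < 1; LST is exp(-s^alpha).
    u = rng.uniform(0.0, 1.0, size)
    w = rng.exponential(1.0, size)
    a = (np.sin((1 - alpha) * np.pi * u)
         * np.sin(alpha * np.pi * u) ** (alpha / (1 - alpha))
         / np.sin(np.pi * u) ** (1 / (1 - alpha)))
    return (a / w) ** ((1 - alpha) / alpha)

def sym_stable(alpha, size):
    # Chambers-Mallows-Stuck formula, symmetric case: E exp(itX) = exp(-|t|^alpha).
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

alpha, alpha_prime = 1.6, 0.75       # product exponent alpha * alpha_prime = 1.2
z = one_sided_stable(alpha_prime, n) ** (1 / alpha) * sym_stable(alpha, n)

t = 0.7
empirical = np.cos(t * z).mean()     # real part of the characteristic function
theoretical = np.exp(-abs(t) ** (alpha * alpha_prime))
print(empirical, theoretical)
```

Conditioning on the one-sided factor S′ gives E exp{itZ | S′} = exp{−|t|^α S′}, and averaging over S′ (using its transform e^{−s^{α′}}) yields exp{−|t|^{αα′}}, which is exactly what the assertion checks.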

Scale Mixtures of Multivariate Elliptically Contoured Stable Distributions
Let U be a nonnegative random variable. The symbol EN_{UΣ}(·) will denote the distribution which for each Borel set A in R^r is defined as

EN_{UΣ}(A) = ∫₀^∞ N_{uΣ}(A) dP(U < u).

It is easy to see that if X is a random vector such that L(X) = N_Σ, independent of U, then EN_{UΣ} = L(√U • X). In this notation, relation (20) can be written as S_{α,Σ} = EN_{2S(α/2,1)Σ}. (21) By analogy, the symbol ES_{α,U^{2/α}Σ} will denote the distribution that for each Borel set A in R^r is defined as

ES_{α,U^{2/α}Σ}(A) = ∫₀^∞ S_{α,u^{2/α}Σ}(A) dP(U < u),

that is, the distribution of the random vector U^{1/α} • S(α, Σ), where the random variable U is independent of the random vector S(α, Σ). Let U be the set of all nonnegative random variables. Now consider an auxiliary statement dealing with the identifiability of the family of distributions {ES_{α,U^{2/α}Σ} : U ∈ U}.

Lemma 1. Whatever random variables U₁ ∈ U and U₂ ∈ U are, if

ES_{α,U₁^{2/α}Σ}(A) = ES_{α,U₂^{2/α}Σ}(A)  (24)

for any set A ∈ B(R^r), then U₁ =_d U₂.

Proof. The proof of this lemma is very simple. If U ∈ U, then it follows from (13) that the characteristic function v^{(U)}_{α,Σ}(t) corresponding to the distribution ES_{α,U^{2/α}Σ} has the form

v^{(U)}_{α,Σ}(t) = E exp{−U(tᵀΣt)^{α/2}} = ψ^{(U)}((tᵀΣt)^{α/2}), t ∈ R^r,  (25)

but on the right-hand side of (25) there stands the Laplace-Stieltjes transform ψ^{(U)}(s) of the random variable U evaluated at the point s = (tᵀΣt)^{α/2}.
Now, if (24) holds for any A ∈ B(R^r), then v^{(U₁)}_{α,Σ}(t) ≡ v^{(U₂)}_{α,Σ}(t), whence by virtue of (25) the Laplace-Stieltjes transforms of the random variables U₁ and U₂ coincide, whence, in turn, it follows that U₁ =_d U₂. The lemma is proved.

Remark 2.
When proving Lemma 1 we established a simple but useful by-product result: if ψ^{(U)}(s) is the Laplace-Stieltjes transform of the random variable U, then the characteristic function v^{(U)}_{α,Σ}(t) of the random vector U^{1/α} • S(α, Σ) has the form

v^{(U)}_{α,Σ}(t) = ψ^{(U)}((tᵀΣt)^{α/2}), t ∈ R^r.  (27)

Let X be a random vector such that L(X) = N_Σ with some positive definite (r × r)-matrix Σ, and let W₁ be a standard exponentially distributed random variable independent of X. Define the multivariate Laplace distribution as L(√(2W₁) • X). The random vector with this multivariate Laplace distribution will be denoted Λ_Σ. It is well known that the Laplace-Stieltjes transform ψ^{(W₁)}(s) of the random variable W₁ with the standard exponential distribution has the form ψ^{(W₁)}(s) = (1 + s)^{−1}, s ≥ 0. Hence, in accordance with (27) and Remark 2, the characteristic function f^{(Λ)}(t) of the random vector Λ_Σ has the form f^{(Λ)}(t) = (1 + tᵀΣt)^{−1}, t ∈ R^r.
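Assuming the representation Λ_Σ = √(2W₁) • X (consistent with the transform ψ^{(W₁)}(s) = (1 + s)^{−1} and with the Linnik case α = 2), the covariance matrix of Λ_Σ equals E[2W₁]·Σ = 2Σ, which is easy to confirm by simulation (a NumPy sketch with an arbitrary Σ):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300_000
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# X ~ N(0, Sigma), W ~ Exp(1); Lambda = sqrt(2 W) * X. The SAME W multiplies
# both coordinates of each X, which is what makes the mixture elliptically contoured.
x = rng.multivariate_normal(np.zeros(2), Sigma, size=n)
w = rng.exponential(1.0, size=n)
lam = np.sqrt(2.0 * w)[:, None] * x

emp_cov = lam.T @ lam / n
print(emp_cov)     # close to 2 * Sigma for large n
```

Multiplying each coordinate by its own independent exponential factor instead would destroy the elliptical contours, so the shared mixing variable is the essential design choice here.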

Generalized Mittag-Leffler Distributions
We begin with the univariate case. The probability distribution of a nonnegative random variable M_δ whose Laplace transform is

ψ^{(M)}_δ(s) = (1 + λs^δ)^{−1}, s ≥ 0,  (28)

where λ > 0, 0 < δ ≤ 1, is called the Mittag-Leffler distribution. For simplicity, in what follows we will consider the standard scale case and assume that λ = 1. The origin of the term Mittag-Leffler distribution is due to the fact that the probability density corresponding to Laplace transform (28) has the form

f^{(M)}_δ(x) = −(d/dx) E_δ(−x^δ), x > 0,

where E_δ(z) is the Mittag-Leffler function with index δ that is defined as the power series

E_δ(z) = Σ_{n=0}^∞ z^n / Γ(δn + 1), z ∈ R.

With δ = 1, the Mittag-Leffler distribution turns into the standard exponential distribution, that is, P(M₁ < x) = 1 − e^{−x}, x ≥ 0. But with δ < 1 the Mittag-Leffler distribution density has a heavy power-type tail: from the well-known asymptotic properties of the Mittag-Leffler function it can be deduced that if 0 < δ < 1, then P(M_δ > x) ∼ x^{−δ}/Γ(1 − δ) as x → ∞, see, e.g., [39]. It is well known that the Mittag-Leffler distribution is geometrically stable. This means that if X₁, X₂, ... are independent random variables whose distributions belong to the domain of attraction of the one-sided strictly stable law L(S(δ, 1)) and NB_{1,p} is the random variable independent of X₁, X₂, ... and having the geometric distribution

P(NB_{1,p} = k) = p(1 − p)^{k−1}, k = 1, 2, ...,  (29)

then for each p ∈ (0, 1) there exists a constant a_p > 0 such that a_p(X₁ + ... + X_{NB_{1,p}}) =⇒ M_δ as p → 0, see, e.g., [40]. The history of the Mittag-Leffler distribution was discussed in [2]. For more details see, e.g., [2,3] and the references therein. The Mittag-Leffler distributions are of serious theoretical interest in problems related to thinned (or rarefied) homogeneous flows of events such as renewal processes, as well as to anomalous diffusion or relaxation phenomena, see [41,42] and the references therein.
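Geometric stability is easiest to see in the boundary case δ = 1, where the Mittag-Leffler law is the standard exponential: a geometric random sum of i.i.d. standard exponentials, normalized by a_p = p, is exactly Exp(1) for every p, not only in the limit p → 0. A sketch (the value p = 0.05 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
p = 0.05

# N ~ Geometric(p) with P(N = k) = p (1-p)^(k-1), k = 1, 2, ...; the sum of N
# i.i.d. standard exponentials given N = k is Gamma(k, 1), so the geometric
# random sum can be sampled in one vectorized step.
N = rng.geometric(p, size=n)
geo_sum = p * rng.gamma(shape=N)   # a_p * (X_1 + ... + X_N) with a_p = p

# The result is exactly standard exponential: its LST at s = 1 is 1/(1+1) = 0.5
# and its mean is 1.
print(np.exp(-geo_sum).mean(), geo_sum.mean())
```

The exactness for δ = 1 follows from E z^N = pz/(1 − (1 − p)z): substituting z = (1 + ps)^{−1} gives (1 + s)^{−1} for every p, the exponential transform.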
Let ν > 0, δ ∈ (0, 1]. It can be easily seen that the Laplace transform ψ^{(M)}_δ(s) (see (28)) is infinitely divisible. Therefore, any positive power of it is a Laplace transform and, moreover, is infinitely divisible as well. The distribution of a nonnegative random variable M_{δ,ν} defined by the Laplace-Stieltjes transform

ψ^{(M)}_{δ,ν}(s) = (1 + s^δ)^{−ν}, s ≥ 0,  (30)

is called the generalized Mittag-Leffler distribution, see [43,44] and the references therein. Sometimes this distribution is called the Pillai distribution [20], although in the original paper [18] R. Pillai called it semi-Laplace. In the present paper we will keep to the first term, generalized Mittag-Leffler distribution.
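The generalized Mittag-Leffler law can be sampled through the product M_{δ,ν} =_d G_{ν,1}^{1/δ} • S(δ, 1), a representation consistent with the multivariate construction G_{ν,α,1} • S(α, µ₊) discussed below: conditionally on G = g, E exp{−sM} = exp{−s^δ g}, and averaging over the gamma factor gives (1 + s^δ)^{−ν}. In particular, the transform at s = 1 must equal 2^{−ν} whatever δ is. A sketch (Kanter's classical formula supplies the stable factor):

```python
import numpy as np

rng = np.random.default_rng(11)
n = 300_000
delta, nu = 0.7, 1.5

# One-sided strictly stable S(delta, 1) via Kanter's representation.
u = rng.uniform(0.0, 1.0, n)
w = rng.exponential(1.0, n)
a = (np.sin((1 - delta) * np.pi * u)
     * np.sin(delta * np.pi * u) ** (delta / (1 - delta))
     / np.sin(np.pi * u) ** (1 / (1 - delta)))
s_stable = (a / w) ** ((1 - delta) / delta)

# M_{delta,nu} = G_{nu,1}^(1/delta) * S(delta, 1); its LST is (1 + s^delta)^(-nu).
m = rng.gamma(shape=nu, size=n) ** (1.0 / delta) * s_stable

lst_at_1 = np.exp(-m).mean()
print(lst_at_1, 2.0 ** (-nu))   # both close to 2^{-nu}
```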
In [4] it was demonstrated that the generalized Mittag-Leffler distribution can be represented as a scale mixture of 'ordinary' Mittag-Leffler distributions for ν ∈ (0, 1] and δ ∈ (0, 1]. In [4] it was also shown that any generalized Mittag-Leffler distribution is a scale mixture of a one-sided stable law with any greater characteristic parameter, the mixing distribution being again a generalized Mittag-Leffler law: this holds for δ ∈ (0, 1], δ′ ∈ (0, 1) and ν > 0. Now turn to the multivariate case. As the starting point of our consideration we take representation (31). The nearest aim is to obtain its multivariate generalization. Let S(α, µ) be a strictly stable random vector with α ≠ 1. Consider the characteristic function h_{α,ν,µ}(t) of the random vector G^{1/α}_{ν,1} • S(α, µ). From (13) and (6) we obtain expression (35) for h_{α,ν,µ}(t); that is, from (1) and (35) we obtain the following result.
Now, comparing the right-hand side of (35) with (30), we can conclude that, if G_{ν,α,1} is the random variable with the generalized gamma distribution with parameters ν > 0, 0 < α ≤ 2, α ≠ 1, λ = 1, and S(α, µ₊) is a random vector with the one-sided strictly stable distribution with characteristic exponent α ∈ (0, 1) and spectral measure µ₊ concentrated on Q_r₊, then we have all grounds to call the distribution of the random vector G_{ν,α,1} • S(α, µ₊) the multivariate generalized Mittag-Leffler distribution with parameters α, ν and µ₊. To provide the possibility of considering the univariate generalized Mittag-Leffler distribution as a particular case of a more general multivariate definition, here we use the measure µ₊ and α ∈ (0, 1) characterizing the 'one-sided' stable law, although from the formal viewpoint this is not obligatory. Moreover, as we will see below, in the multivariate case the (generalized) Mittag-Leffler distribution can be regarded as a special case of the (generalized) Linnik law defined in the same way but with an arbitrary spectral measure µ and α ∈ (0, 2].

Generalized Linnik Distributions
In 1953 Yu. V. Linnik [46] introduced a class of symmetric distributions whose characteristic functions have the form

f^{(L)}_α(t) = (1 + |t|^α)^{−1}, t ∈ R,  (38)

where α ∈ (0, 2]. The distributions with the characteristic function (38) are traditionally called the Linnik distributions. Although sometimes the term α-Laplace distributions [18] is used, we will use the first term which has already become conventional. If α = 2, then the Linnik distribution turns into the Laplace distribution corresponding to the density

ℓ(x) = ½ e^{−|x|}, x ∈ R.  (39)

A random variable with density (39) will be denoted Λ. A random variable with the Linnik distribution with parameter α will be denoted L_α.
Perhaps most often, Linnik distributions are recalled as examples of symmetric geometrically stable distributions. This means that if X₁, X₂, ... are independent random variables whose distributions belong to the domain of attraction of an α-strictly stable symmetric law and NB_{1,p} is the random variable independent of X₁, X₂, ... and having the geometric distribution (29), then for each p ∈ (0, 1) there exists a constant a_p > 0 such that a_p(X₁ + ... + X_{NB_{1,p}}) =⇒ L_α as p → 0, see, e.g., [47] or [40].
The properties of the Linnik distributions were studied in many papers. We should mention [7][8][9][48] and other papers, see the survey in [2].
In [2,7] it was demonstrated that

L_α =_d √(2M_{α/2}) • X,  (40)

where the random variable M_{α/2} has the Mittag-Leffler distribution with parameter α/2 and is independent of the standard normal random variable X. The multivariate Linnik distribution was introduced by D. N. Anderson in [49] where it was proved that the function

f^{(L)}_{α,Σ}(t) = (1 + (tᵀΣt)^{α/2})^{−1}, t ∈ R^r, α ∈ (0, 2],  (41)

is the characteristic function of an r-variate probability distribution, where Σ is a positive definite (r × r)-matrix. In [49] the distribution corresponding to the characteristic function (41) was called the r-variate Linnik distribution. For the properties of these distributions see [16,49]. To distinguish it from the general case, in what follows the distribution corresponding to characteristic function (41) will be called the multivariate (centered) elliptically contoured Linnik distribution.
The r-variate elliptically contoured Linnik distribution can also be defined in another way. Let X be a random vector such that L(X) = N_Σ, where Σ is a positive definite (r × r)-matrix, independent of the random variable M_{α/2}. By analogy with (40), introduce the random vector L_{α,Σ} as

L_{α,Σ} = √(2M_{α/2}) • X.  (42)

Then, in accordance with what has been said in Section 5, L(L_{α,Σ}) = EN_{2M_{α/2}Σ}. Using Remark 2 we can easily make sure that the two definitions of the multivariate elliptically contoured Linnik distribution coincide. Indeed, with the account of (28), according to Remark 2, the characteristic function of the random vector L_{α,Σ} defined by (42) has the form

f^{(L)}_{α,Σ}(t) = ψ^{(M)}_{α/2}(tᵀΣt) = (1 + (tᵀΣt)^{α/2})^{−1}, t ∈ R^r,

that coincides with Anderson's definition (41).
One more equivalent definition of the multivariate elliptically contoured Linnik distribution can be proposed. Namely, let L_{α,Σ} be an r-variate random vector such that

L_{α,Σ} = W₁^{1/α} • S(α, Σ).  (43)

In accordance with (27) and Remark 2, the characteristic function of the random vector L_{α,Σ} defined by (43) again has the form f^{(L)}_{α,Σ}(t) = ψ^{(W₁)}((tᵀΣt)^{α/2}) = (1 + (tᵀΣt)^{α/2})^{−1}, t ∈ R^r. The definitions (42) and (43) open the way to formulating limit theorems stating that the multivariate elliptically contoured Linnik distribution can not only be limiting for geometric random sums of independent identically distributed random vectors with infinite second moments [50], but can also be limiting for random sums of independent random vectors with finite covariance matrices.
It can be easily seen that the characteristic function f^{(L)}_α(t) (see (38)) is infinitely divisible. Therefore, any positive power of it is a characteristic function and, moreover, is also infinitely divisible. In [17], Pakes showed that the probability distributions known as generalized Linnik distributions, which have the characteristic functions

f^{(L)}_{α,ν}(t) = (1 + |t|^α)^{−ν}, t ∈ R, α ∈ (0, 2], ν > 0,  (44)

play an important role in some characterization problems of mathematical statistics. The class of probability distributions corresponding to characteristic function (44) has been found to possess interesting properties and applications, see [6,7,[10][11][12]14,51,52] and related papers. In particular, these distributions are good candidates to model financial data which exhibit high kurtosis and heavy tails [53]. Any random variable with the characteristic function (44) will be denoted L_{α,ν}. Recall some results containing mixture representations for the generalized Linnik distribution. The following well-known result is due to Devroye [7] and Pakes [17] who showed that

L_{α,ν} =_d G^{1/α}_{ν,1} • S(α, 0)  (45)

for any α ∈ (0, 2] and ν > 0. It is well known that E G^γ_{ν,1} = Γ(ν + γ)/Γ(ν) for γ > −ν. Hence, for 0 ≤ β < α, from (8) and (45) we obtain an explicit expression for the moments E|L_{α,ν}|^β.
Generalizing and improving some results of [15,17], with the account of (31), in [5] it was demonstrated that for ν > 0 and α ∈ (0, 2]

L_{α,ν} =_d √(2M_{α/2,ν}) • X,  (46)

that is, the generalized Linnik distribution is a normal scale mixture with the generalized Mittag-Leffler mixing distribution.
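The normal-mixture representation stated above is directly testable: if L = √(2M_{α/2,ν})·X with X standard normal, then E cos(tL) = E exp{−t² M_{α/2,ν}} = (1 + |t|^α)^{−ν}. A simulation sketch (Kanter's classical formula again supplies the stable factor inside the mixing variable):

```python
import numpy as np

rng = np.random.default_rng(19)
n = 400_000
alpha, nu = 1.4, 0.8
delta = alpha / 2.0                     # index of the mixing Mittag-Leffler law

# One-sided stable S(delta, 1) via Kanter's representation.
u = rng.uniform(0.0, 1.0, n)
w = rng.exponential(1.0, n)
a = (np.sin((1 - delta) * np.pi * u)
     * np.sin(delta * np.pi * u) ** (delta / (1 - delta))
     / np.sin(np.pi * u) ** (1 / (1 - delta)))
s_stable = (a / w) ** ((1 - delta) / delta)

# Generalized Mittag-Leffler mixing variable and the normal scale mixture.
m = rng.gamma(shape=nu, size=n) ** (1.0 / delta) * s_stable
linnik = np.sqrt(2.0 * m) * rng.standard_normal(n)

t = 1.0
empirical = np.cos(t * linnik).mean()
theoretical = (1.0 + abs(t) ** alpha) ** (-nu)
print(empirical, theoretical)
```

Note that the heavy tail of the mixing variable (its mean is infinite for α < 2) causes no numerical trouble here, because the characteristic function estimate averages a bounded quantity.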
For δ ∈ (0, 1] denote R_δ = S(δ, 1)/S′(δ, 1), where S(δ, 1) and S′(δ, 1) are independent random variables with one and the same one-sided stable distribution with characteristic exponent δ. In [2] it was shown that the probability density f^{(R)}_δ(x) of the ratio R_δ of two independent random variables with one and the same one-sided strictly stable distribution with parameter δ has the form

f^{(R)}_δ(x) = sin(πδ) x^{δ−1} / [π(1 + 2x^δ cos(πδ) + x^{2δ})], x > 0,

also see [1], Section 3.3, where this formula was hidden among other calculations, but was not stated explicitly. In [5] it was proved that if ν ∈ (0, 1] and α ∈ (0, 2], then the density of the univariate generalized Linnik distribution admits a simple integral representation via the known elementary densities (2), (39) and (45).
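Assuming the standard form f^{(R)}_δ(x) = sin(πδ)x^{δ−1}/[π(1 + 2x^δ cos(πδ) + x^{2δ})], a quick numerical sanity check is available: the density integrates to one, and since R_δ and 1/R_δ have the same law, the mass on (0, 1) must be exactly 1/2. A sketch:

```python
import numpy as np

delta = 0.6

def ratio_density(x, d):
    # f_R(x) = sin(pi d) x^(d-1) / (pi (1 + 2 x^d cos(pi d) + x^(2d))), x > 0
    return (np.sin(np.pi * d) * x ** (d - 1.0)
            / (np.pi * (1.0 + 2.0 * np.cos(np.pi * d) * x ** d + x ** (2.0 * d))))

# Integrate over (0, 1) on a log-spaced grid (the density has an integrable
# x^(delta-1) singularity at zero); by the R = 1/R symmetry this mass is 1/2.
x = np.geomspace(1e-12, 1.0, 200_000)
f = ratio_density(x, delta)
half_mass = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x))
print(half_mass)   # close to 0.5
```

The symmetry itself is easy to verify analytically: substituting x → 1/x and multiplying by the Jacobian x^{−2} reproduces the same expression.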
As concerns the property of geometric stability, the following statement holds.

Lemma 3. Any univariate symmetric random variable Y_α is geometrically stable if and only if it is representable as Y_α =_d σ · W₁^{1/α} • S(α, 0) with some α ∈ (0, 2] and σ > 0.
Any univariate positive random variable Y_α is geometrically stable if and only if it is representable as Y_α =_d σ · W₁^{1/α} • S(α, 1) with some α ∈ (0, 1] and σ > 0. Proof. These representations immediately follow from the definition of geometrically stable distributions and the transfer theorem for cumulative geometric random sums, see, e.g., [54].

Corollary 1.
If ν ≠ 1, then from the identifiability of scale mixtures of stable laws (see Lemma 1) it follows that the generalized Linnik distribution and the generalized Mittag-Leffler distribution are not geometrically stable.
Let Σ be a positive definite (r × r)-matrix, α ∈ (0, 2], ν > 0. As with the 'ordinary' multivariate Linnik distribution, the multivariate elliptically contoured generalized Linnik distribution can be defined in at least two equivalent ways. First, it can be defined by its characteristic function. Namely, a multivariate distribution is called a (centered) elliptically contoured generalized Linnik law, if the corresponding characteristic function has the form

f^{(L)}_{α,ν,Σ}(t) = (1 + (tᵀΣt)^{α/2})^{−ν}, t ∈ R^r.  (47)

Second, let X be a random vector such that L(X) = N_Σ, independent of the random variable M_{α/2,ν} with the generalized Mittag-Leffler distribution. By analogy with (46), introduce the random vector L_{α,ν,Σ} as

L_{α,ν,Σ} = √(2M_{α/2,ν}) • X.  (48)
Then, in accordance with what has been said in Section 5, L(L_{α,ν,Σ}) = EN_{2M_{α/2,ν}Σ}. The distribution defined by (48) will be called the multivariate (centered) elliptically contoured generalized Linnik distribution.
Using Remark 2, we can easily verify that the two definitions of the multivariate elliptically contoured generalized Linnik distribution coincide. Indeed, taking (30) into account, according to Remark 2 the characteristic function of the random vector L α,ν,Σ defined by (48) coincides with (47).
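Definition (48) translates directly into a sampling recipe. The sketch below assumes the standard product representation of the generalized Mittag-Leffler variable, M α/2,ν d= G ν,1 ^(2/α) • S(α/2, 1), and Kanter's representation for simulating the one-sided stable law; both representations and all function names are assumptions of this illustration, not formulas taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_sided_stable(delta, size, rng):
    """Kanter's representation of the one-sided strictly stable law
    with Laplace transform exp(-s**delta), 0 < delta < 1 (assumed
    normalization)."""
    u = rng.uniform(0.0, np.pi, size)
    w = rng.exponential(1.0, size)
    return (np.sin(delta * u)
            * (np.sin((1.0 - delta) * u) / w) ** ((1.0 - delta) / delta)
            / np.sin(u) ** (1.0 / delta))

def generalized_linnik(alpha, nu, sigma, n, rng):
    """Sample n vectors with the elliptically contoured generalized
    Linnik law via L = sqrt(2*M) * X with M = G**(2/alpha) * S(alpha/2, 1)."""
    r = sigma.shape[0]
    g = rng.gamma(nu, 1.0, n)                     # G_{nu,1}
    s = one_sided_stable(alpha / 2.0, n, rng)     # S(alpha/2, 1)
    m = g ** (2.0 / alpha) * s                    # generalized Mittag-Leffler
    x = rng.multivariate_normal(np.zeros(r), sigma, n)  # X with L(X) = N_Sigma
    return np.sqrt(2.0 * m)[:, None] * x

sigma = np.array([[1.0, 0.3], [0.3, 2.0]])
sample = generalized_linnik(alpha=1.5, nu=0.7, sigma=sigma, n=10000, rng=rng)
print(sample.shape)
```

Equivalently, one could sample G ν,1 ^(1/α) • S(α, 0) directly; the normal scale-mixture route above avoids simulating a symmetric stable variable.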
It is well known that the Laplace-Stieltjes transform ψ (G) ν,1 (s) of the random variable G ν,1 having the gamma distribution with shape parameter ν has the form ψ (G) ν,1 (s) = (1 + s)^(−ν), s ≥ 0. Then, in accordance with Remark 1, the characteristic function of the random vector L α,ν,Σ defined by (49) again has the form (47). Definitions (48) and (49) open the way to limit theorems stating that the multivariate elliptically contoured generalized Linnik distribution can be limiting both for random sums of independent identically distributed random vectors with infinite second moments and for random sums of independent random vectors with finite covariance matrices.
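The quoted Laplace-Stieltjes transform is the standard gamma one, and it can be cross-checked against a numerical integral of the gamma density; the function names below are illustrative.

```python
import math
from scipy.integrate import quad

def gamma_lst_numeric(s, nu):
    # E exp(-s*G) for G with the gamma density x**(nu-1)*exp(-x)/Gamma(nu)
    integrand = lambda x: (math.exp(-s * x) * x ** (nu - 1.0)
                           * math.exp(-x) / math.gamma(nu))
    value, _ = quad(integrand, 0.0, math.inf)
    return value

pairs = []
for s, nu in [(0.5, 0.7), (2.0, 1.0), (1.3, 3.2)]:
    exact = (1.0 + s) ** (-nu)          # (1 + s)**(-nu)
    pairs.append((exact, gamma_lst_numeric(s, nu)))
    print(s, nu, round(exact, 6), round(pairs[-1][1], 6))
```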
There are different ways to generalize the univariate symmetric Linnik and generalized Linnik laws to the asymmetric case. The traditional (and formal) approach to the asymmetric generalization of the Linnik distribution (see, e.g., [15,55,56]) consists in considering geometric sums of random summands whose distributions are attracted to an asymmetric strictly stable distribution. The variances of such summands are infinite. Since in modeling real phenomena there are, as a rule, no solid reasons to reject the assumption that the variances of elementary summands are finite, in [57] two alternative asymmetric generalizations were proposed, based on the representability of the Linnik distribution as a scale mixture of normal laws or a scale mixture of Laplace laws.
Nevertheless, for our purposes it is convenient to deal with the traditional asymmetric generalization of the generalized Linnik distribution. Let S(α, θ) be a random variable with the strictly stable distribution defined by the characteristic exponent α ∈ (0, 2] and asymmetry parameter θ ∈ [−1, 1], and let G ν,α,1 be a random variable having the GG distribution with shape parameter ν > 0 and exponent parameter α, independent of S(α, θ). Based on representation (45), we define the asymmetric generalized Linnik distribution as L(G ν,α,1 • S(α, θ)). A random variable with this distribution will be denoted L α,ν,θ.
In the multivariate case, a natural way to construct asymmetric Linnik laws is to apply Lemma 2 with a not necessarily elliptically contoured strictly stable distribution. Namely, let the random variable G ν,α,1 have the generalized gamma distribution and be independent of the random vector S(α, µ) with the strictly stable distribution with characteristic exponent α ∈ (0, 2] and spectral measure µ. Extending the definitions of the multivariate elliptically contoured generalized Linnik distribution given above, we will say that the distribution of the random vector G ν,α,1 • S(α, µ) is the multivariate generalized Linnik distribution. Formally, this definition embraces both the multivariate elliptically contoured generalized Linnik laws and, moreover, the multivariate generalized Mittag-Leffler laws (if µ = µ +). A random vector with the multivariate generalized Linnik distribution will be denoted L α,ν,µ.
If ν = 1, then we have the 'ordinary' multivariate Linnik distribution. By definition, L α,1,µ = L α,µ . Mixture representations for the generalized Mittag-Leffler distribution were considered in [5] and discussed in Section 6 together with their extensions to the multivariate case. Here, we will focus on the mixture representations for the multivariate generalized Linnik distribution. Our reasoning is based on the definition of the multivariate generalized Linnik distribution given above and Theorem 1.

Consider projections of a random vector with the multivariate generalized Linnik distribution.
For an arbitrary u ∈ R r we have ⟨u, L α,ν,µ⟩ d= G ν,α,1 • ⟨u, S(α, µ)⟩. This means that the following statement holds.

Theorem 5. Let the random vector L α,ν,µ have the multivariate generalized Linnik distribution with α ∈ (0, 2], ν > 0 and spectral measure µ, and let the projection scale parameter function γ(u) and the projection asymmetry parameter function θ(u), u ∈ R r, correspond to the spectral measure µ. Then any projection of the random vector L α,ν,µ has the univariate asymmetric generalized Linnik distribution with asymmetry parameter θ(u) scaled by γ(u): ⟨u, L α,ν,µ⟩ d= γ(u) • L α,ν,θ(u), u ∈ R r.

Now consider the elliptically contoured case. Let α ∈ (0, 2] and let the random vector Λ Σ have the multivariate Laplace distribution with some positive definite (r × r)-matrix Σ. Using the mixture representation proved in [33] for δ ∈ (0, 1], together with Theorem 4 and (52), we obtain the following statement.

General Scale-Mixed Stable Distributions
In the preceding sections we considered special scale-mixed stable distributions in which the mixing distribution was generalized gamma, leading to the popular Mittag-Leffler and Linnik laws. Now we turn to the case where the mixing distribution can be arbitrary.
Let α ∈ (0, 2], let U be a positive random variable and let S(α, µ) be a random vector with the strictly stable distribution defined by the characteristic exponent α and spectral measure µ. An r-variate random vector Y α,µ is said to have the U-scale-mixed stable distribution if Y α,µ d= U^(1/α) • S(α, µ). Correspondingly, for 0 < α ≤ 1, a univariate positive random variable Y α,1 is said to have the U-scale-mixed one-sided stable distribution if it is representable as Y α,1 d= U^(1/α) • S(α, 1). As above, in the elliptically contoured case, where a positive definite (r × r)-matrix Σ corresponds to the spectral measure µ, we will write Y α,Σ instead of Y α,µ.
Theorem 6. Let U be a positive random variable, α ∈ (0, 2], α′ ∈ (0, 1]. Let S(α, µ) be a random vector with the strictly stable distribution defined by the characteristic exponent α and spectral measure µ. Let an r-variate random vector Y αα′,µ have the U-scale-mixed stable distribution and let a random variable Y α′,1 have the U-scale-mixed one-sided stable distribution. Assume that S(α, µ) and Y α′,1 are independent. Then Y αα′,µ d= Y α′,1^(1/α) • S(α, µ).

Proof. From the definition of a U-scale-mixed stable distribution and (17) we have Y α′,1^(1/α) • S(α, µ) d= (U^(1/α′) • S(α′, 1))^(1/α) • S(α, µ) d= U^(1/(αα′)) • (S(α′, 1)^(1/α) • S(α, µ)) d= U^(1/(αα′)) • S(αα′, µ) d= Y αα′,µ.

In the elliptically contoured case with α = 2, from Theorem 6 we obtain the following statement.
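In the one-sided case, the composition in Theorem 6 can also be traced directly on Laplace transforms. The chain below, for α, α′ ∈ (0, 1], is a sketch under the standard normalization E e^{−sS(δ,1)} = e^{−s^δ} (an assumption here), conditioning first on Y α′,1 and then on U:

```latex
\mathsf{E}\exp\bigl\{-s\,Y_{\alpha',1}^{1/\alpha}S(\alpha,1)\bigr\}
   = \mathsf{E}\exp\bigl\{-s^{\alpha}Y_{\alpha',1}\bigr\}
   = \mathsf{E}\exp\bigl\{-s^{\alpha}U^{1/\alpha'}S(\alpha',1)\bigr\}
   = \mathsf{E}\exp\bigl\{-s^{\alpha\alpha'}U\bigr\}
   = \mathsf{E}\exp\bigl\{-s\,Y_{\alpha\alpha',1}\bigr\},\qquad s\ge 0,
```

so both sides of the product representation have one and the same Laplace transform.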

Corollary 4.
Let α ∈ (0, 2), let U be a positive random variable, let Σ be a positive definite (r × r)-matrix, and let X be a random vector such that L(X) = N Σ. Then Y α,Σ d= (2Y α/2,1)^(1/2) • X. In other words, any multivariate scale-mixed symmetric stable distribution is a scale mixture of multivariate normal laws. On the other hand, since the normal distribution is stable with α = 2, any multivariate normal scale mixture is a 'trivial' multivariate scale-mixed stable distribution.
To give particular examples of 'non-trivial' scale-mixed stable distributions, among the possible mixing distributions of the random variable U we will distinguish a special class that can play an important role in modeling observed regularities by heavy-tailed distributions. Namely, assume that V is a positive random variable and let the distribution of U, conditionally on V, be a gamma distribution scaled by V; that is, the distribution of U is a scale mixture of gamma distributions. We will denote the class of these distributions G (V). This class is rather wide and, besides the gamma distribution and its particular cases (exponential, Erlang, chi-square, etc.) with exponentially fast decreasing tails, contains, for example, the Pareto and Snedecor-Fisher laws with power-type decreasing tails. In the last two cases the random variable V is assumed to have the corresponding gamma and inverse gamma distributions, respectively.
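As an illustration of how a power-tailed member of this class arises, mixing the exponential law (the gamma distribution with shape 1) over a gamma-distributed rate produces the Pareto (Lomax) density a/(1 + x)^(a+1). The particular parameterization below is an assumption chosen for the sketch, not taken from the text.

```python
import math
from scipy.integrate import quad

def mixed_density(x, a):
    # density of U, where U | V = v is exponential with rate v
    # and V has the gamma distribution with shape a and unit scale
    integrand = lambda v: (v * math.exp(-v * x)
                           * v ** (a - 1.0) * math.exp(-v) / math.gamma(a))
    value, _ = quad(integrand, 0.0, math.inf)
    return value

a = 2.5
for x in (0.1, 1.0, 5.0):
    lomax = a / (1.0 + x) ** (a + 1.0)  # Pareto (Lomax) density with shape a
    print(x, round(mixed_density(x, a), 6), round(lomax, 6))
```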
For L(U) ∈ G (V), the U-scale-mixed stable distributions are scale mixtures of the generalized Mittag-Leffler and multivariate generalized Linnik laws. Therefore, we pay special attention to mixture representations of the generalized Mittag-Leffler and multivariate generalized Linnik distributions. These representations can be easily extended to any U-scale-mixed stable distribution with L(U) ∈ G (V).

Convergence of the Distributions of Random Sequences with Independent Indices to Multivariate Scale-Mixed Stable Distributions
In applied probability it is conventional to regard a model distribution as well-justified or adequate if it is an asymptotic approximation, that is, if there exists a rather simple limit setting (say, a scheme of maximization or summation of random variables) and a corresponding limit theorem in which the model under consideration manifests itself as the limit distribution. The existence of such a limit setting can provide a better understanding of the real mechanisms that generate the observed statistical regularities; see, e.g., [54].
In this section we will prove some limit theorems presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. As particular cases, conditions will be obtained for the convergence of the distributions of random sums of random vectors with both infinite and finite covariance matrices to the multivariate generalized Linnik distribution.
Consider a sequence {S n } n≥1 of random elements taking values in R r. Let Ξ(R r ) be the set of all nonsingular linear operators acting from R r to R r. The identity operator acting from R r to R r will be denoted I r. Assume that there exist sequences {B n } n≥1 of operators from Ξ(R r ) and {a n } n≥1 of elements from R r such that B n −1 (S n − a n ) =⇒ Q (n → ∞), (53) where Q is a random element whose distribution with respect to P will be denoted H, H = L(Q). Along with {S n } n≥1, consider a sequence of integer-valued positive random variables {N n } n≥1 such that for each n ≥ 1 the random variable N n is independent of the sequence {S k } k≥1. Let c n ∈ R r, D n ∈ Ξ(R r ), n ≥ 1. Now we will formulate sufficient conditions for the weak convergence of the distributions of the random elements Z n = D n −1 (S N n − c n ) as n → ∞. For g ∈ R r denote W n (g) = D n −1 (B N n g + a N n − c n ). By measurability of a random field we will mean its measurability as a function of two variables, an elementary outcome and a parameter, with respect to the Cartesian product of the σ-algebra A and the Borel σ-algebra B(R r ) of subsets of R r.
In [58,59] the following theorem was proved, which establishes sufficient conditions for the weak convergence of multivariate random sequences with independent random indices under operator normalization.

Theorem 7 [58,59]. Let ‖D n −1 ‖ → ∞ as n → ∞ and let the sequence of random variables {‖D n −1 B N n ‖} n≥1 be tight. Assume that there exist a random element Q with distribution H and an r-dimensional random field W(g), g ∈ R r, such that (53) holds and W n (g) =⇒ W(g) (n → ∞) for H-almost all g ∈ R r. Then the random field W(g) is measurable, linearly depends on g, and Z n =⇒ W(Q) (n → ∞), where the random field W( · ) and the random element Q are independent.

Now consider a special case of the general limit setting and assume that the normalization is scalar and the limit random vector Q in (53) has an elliptically contoured stable distribution. Namely, let {b n } n≥1 be an infinitely increasing sequence of positive numbers and, instead of the general condition (53), assume that b n −1/α S n =⇒ S(α, Σ) (54) as n → ∞, where α ∈ (0, 2] and Σ is some positive definite matrix. Let {d n } n≥1 be an infinitely increasing sequence of positive numbers. As Z n take the scalar normalized random vector Z n = d n −1/α S N n. The following result can be considered as a multivariate generalization of the main theorem of [29].

Theorem 8. Let N n → ∞ in probability as n → ∞. Assume that the random vectors S 1 , S 2 , . . . satisfy condition (54) with α ∈ (0, 2] and a positive definite matrix Σ. Then a distribution F such that Z n =⇒ F (n → ∞) (55) exists if and only if there exists a distribution function V(x) satisfying the conditions (i) V(x) = 0 for x < 0; (ii) F = E S α,U 2/α Σ, where U is a random variable with the distribution function V(x); (iii) b N n /d n =⇒ U (n → ∞).

Proof. The 'if' part. We will essentially exploit Theorem 7. For each n ≥ 1 let a n = c n = 0, B n = b n 1/α I r, D n = d n 1/α I r. Let U be a random variable with the distribution function V(x). Note that the conditions of the theorem guarantee the tightness of the sequence of random variables {(b N n /d n ) 1/α } n≥1 implied by its weak convergence to the random variable U 1/α.
Further, in the case under consideration we have W n (g) = (b N n /d n ) 1/α · g, g ∈ R r . Therefore, the condition b N n /d n =⇒ U implies W n (g) =⇒ U 1/α g for all g ∈ R r . Condition (54) means that in the case under consideration H = S α,Σ . Hence, by Theorem 7 Z n =⇒ U 1/α • S(α, Σ) (recall that the symbol • stands for the product of independent random elements). The distribution of the random element U 1/α • S(α, Σ) coincides with ES α,U 2/α Σ , see Section 5.
The 'only if' part. Let condition (55) hold. Make sure that the sequence {‖D n −1 B N n ‖} n≥1 is tight. Let Q d= S(α, Σ). There exist δ > 0 and ρ > 0 such that (56) holds. For the ρ specified above and an arbitrary x > 0 we have (57) (the last equality holds since any constant is independent of any random variable). Since by (54) the convergence b k −1/α S k =⇒ Q takes place as k → ∞, from (56) it follows that there exists a number k 0 such that the corresponding inequality holds for all k > k 0. Therefore, continuing (57) we obtain (58). From the condition N n P −→ ∞ as n → ∞ it follows that for any ε > 0 there exists an n 0 = n 0 (ε) such that P(N n ≤ n 0 ) < ε for all n ≥ n 0. Therefore, with account of the tightness of the sequence {Z n } n≥1, which follows from its weak convergence to the random element Z with L(Z) = F implied by (55), relation (58) implies (59), whatever ε > 0 is. Now assume that the sequence {b N n /d n } n≥1 is not tight. In that case there exist a γ > 0, a sequence N of natural numbers, and a sequence {x n } n∈N of real numbers satisfying the conditions x n ↑ ∞ (n → ∞, n ∈ N ) and (60). But, according to (59), for any ε > 0 there exist M = M(ε) and n 0 = n 0 (ε) such that (61) holds for all n ≥ n 0 (ε). Choose ε < γ/2, where γ is the number from (60). Then for all n ∈ N large enough, in accordance with (60), the inequality opposite to (61) must hold. The obtained contradiction, by the Prokhorov theorem, proves the tightness of the sequence {‖D n −1 B N n ‖} n≥1 or, which in this case is the same, of the sequence {b N n /d n } n≥1.
Introduce the set W (Z) containing all nonnegative random variables U such that P(Z ∈ A) = E S α,U 2/α Σ (A) for any A ∈ B(R r ). Let λ( · , · ) be any probability metric that metrizes weak convergence in the space of r-variate random vectors or, which is the same in this context, in the space of distributions, say, the Lévy-Prokhorov metric. (If X 1 and X 2 are random variables with the distributions F 1 and F 2 respectively, then we identify λ(X 1 , X 2 ) and λ(F 1 , F 2 ).) Show that there exists a sequence of random variables {U n } n≥1, U n ∈ W (Z), such that λ(b N n /d n , U n ) → 0 (n → ∞). (62) Denote β n = inf{λ(b N n /d n , U): U ∈ W (Z)}. Prove that β n → 0 as n → ∞. Assume the contrary. In that case β n ≥ δ for some δ > 0 and all n from some subsequence N of natural numbers. Choose a subsequence N 1 ⊆ N so that the sequence {b N n /d n } n∈N 1 weakly converges to a random variable U (this is possible due to the tightness of the family {b N n /d n } n≥1 established above). But then W n (g) =⇒ U 1/α g as n → ∞, n ∈ N 1, for any g ∈ R r. Applying Theorem 7 to n ∈ N 1 with condition (54) playing the role of condition (53), we make sure that U ∈ W (Z), since condition (55) provides the coincidence of the limits of all weakly convergent subsequences. So we arrive at a contradiction to the assumption that β n ≥ δ for all n ∈ N 1. Hence, β n → 0 as n → ∞.
For any n = 1, 2, . . . choose a random variable U n from W (Z) satisfying the condition λ(b N n /d n , U n ) ≤ β n + 1/n.
This sequence obviously satisfies condition (62). Now consider the structure of the set W (Z). This set contains all the random variables defining the family of special mixtures of multivariate centered elliptically contoured stable laws considered in Lemma 1, according to which this family is identifiable. So, whatever the random element Z is, the set W (Z) contains at most one element. Therefore, condition (62) is actually equivalent to b N n /d n =⇒ U (n → ∞), that is, to condition (iii) of the theorem. The theorem is proved. In particular, if b N n /d n =⇒ c for some constant c > 0, then the limit law is itself elliptically contoured stable; moreover, in this case Σ is replaced by c 2/α Σ.
This statement immediately follows from Theorem 8, taking Lemma 1 into account. Now consider the convergence of the distributions of random sums of random vectors to special scale-mixed multivariate elliptically contoured stable laws.
In Section 4 (see (20)) we made sure that all scale-mixed centered elliptically contoured stable distributions are representable as multivariate normal scale mixtures. Together with Theorem 8, this observation suggests at least two fundamentally different limit schemes in which each of these distributions can appear as the limit law for random sums of independent random vectors. We will illustrate these two cases by the example of the multivariate generalized Linnik distribution.
As we have already mentioned, 'ordinary' Linnik distributions are geometrically stable. Geometrically stable distributions are the only possible limits for the distributions of geometric random sums of independent identically distributed random vectors. In this case, the distributions of the summands belong to the domain of attraction of the multivariate strictly stable law with some characteristic exponent α ∈ (0, 2] and hence, for 0 < α < 2, the univariate marginals have infinite moments of orders greater than or equal to α. As concerns the case α = 2, where the variances of the marginals are finite, within the scheme of geometric summation the only possible limit law is the multivariate Laplace distribution. Correspondingly, as we will demonstrate below, the multivariate generalized Linnik distributions can be limiting for negative binomial sums of independent identically distributed random vectors. Negative binomial random sums turn out to be important and adequate models of characteristics of precipitation (total precipitation volume, etc.) during wet (rainy) periods in meteorology [60-62]. However, in this case the summands (daily rainfall volumes) also must have distributions from the domain of attraction of a strictly stable law with some characteristic exponent α ∈ (0, 2] and hence, for α ∈ (0, 2), infinite variances. This seems doubtful, since to have an infinite variance, a random variable must be allowed to take arbitrarily large values with positive probability. If α = 2, then the only possible limit distribution for negative binomial random sums is the so-called variance gamma distribution, which is well known in financial mathematics [54].
However, when the (generalized) Linnik distributions are used as models of statistical regularities observed in practice and an additive structure model of the type of a (stopped) random walk is used for the observed process, the researcher cannot avoid the following question: which of the two combinations of conditions is encountered more often:
• the distribution of the number of summands (the number of jumps of a random walk) is asymptotically gamma (say, negative binomial), but the distributions of the summands (jumps) have tails so heavy that, at least, their variances are infinite, or
• the second moments (variances) of the summands (jumps) are finite, but the number of summands behaves so irregularly that its infinitely large values are possible?
Since, as a rule, when real processes are modeled, there are no serious reasons to reject the assumption that the variances of jumps are finite, the second combination at least deserves a thorough analysis.
As was demonstrated in the preceding sections, the scale-mixed multivariate elliptically contoured stable distributions (including the multivariate (generalized) Linnik laws), even with α < 2, can be represented as multivariate normal scale mixtures. This means that they can be limit distributions in analogs of the central limit theorem for random sums of independent random vectors with finite covariance matrices. Such analogs with univariate 'ordinary' Linnik limit distributions were presented in [2] and extended to generalized Linnik distributions in [5]. In what follows, we will present some examples of limit settings for random sums of independent random vectors with fundamentally different tail behavior. In particular, it will be demonstrated that the scheme of negative binomial summation is far from the only asymptotic setting (even for sums of independent random variables!) in which the multivariate generalized Linnik law appears as the limit distribution.

Remark 3.
Based on the results of [63], by an approach that slightly differs from the one used here in its starting point, it was demonstrated in [64] that if the random vectors {S n } n≥1 are formed as cumulative sums of independent random vectors, S n = X 1 + . . . + X n, n ∈ N, (63) where X 1 , X 2 , . . . are independent R r -valued random vectors, then the condition N n P −→ ∞ in the formulations of Theorem 8 and Corollary 4 can be omitted.
Throughout this section we assume that the random vectors S n have the form (63). Let U ∈ U (see Section 5), α ∈ (0, 2], and let Σ be a positive definite matrix. In Section 8, the r-variate random vector Y α,Σ with the multivariate U-scale-mixed elliptically contoured stable distribution was introduced as Y α,Σ = U 1/α • S(α, Σ). In this section we will consider the conditions under which multivariate U-scale-mixed stable distributions can be limiting for sums of independent random vectors.
Consider a sequence of integer-valued positive random variables {N n } n≥1 such that for each n ≥ 1 the random variable N n is independent of the sequence {S k } k≥1. First, let {b n } n≥1 be an infinitely increasing sequence of positive numbers such that convergence (54) takes place, and let {d n } n≥1 be an infinitely increasing sequence of positive numbers. The following statement presents necessary and sufficient conditions for the convergence d n −1/α S N n =⇒ Y α,Σ (n → ∞).

Theorem 9. Under condition (54), the convergence d n −1/α S N n =⇒ Y α,Σ takes place if and only if b N n /d n =⇒ U (n → ∞).

Proof. This theorem is a direct consequence of Theorem 8 and the definition of Y α,Σ, with account of Remark 3.
Corollary 6. Under condition (54), the convergence d n −1/α S N n =⇒ L α,ν,Σ to the multivariate generalized Linnik law takes place if and only if b N n /d n =⇒ G ν,1 (n → ∞). (65)

Proof. To prove this statement it suffices to notice that the multivariate generalized Linnik distribution is a U-scale-mixed stable distribution with U d= G ν,1 (see representation (49)) and to refer to Theorem 9 with account of Remark 3.
Condition (65) holds, for example, if b n = d n = n, n ∈ N, and the random variable N n has the negative binomial distribution with shape parameter ν > 0, that is, N n = NB ν,p n with p n = n −1 (see, e.g., [65,66]). In this case E NB ν,p n = ν(1 − p n )/p n = ν(n − 1) ∼ nν as n → ∞.
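This asymptotics can be checked on the moments of the normalized index N n /n for N n = NB ν,p n with p n = n −1: both its mean and variance approach ν, matching the gamma law G ν,1. A sketch (scipy's nbinom, with support starting at zero, is an assumed convention):

```python
from scipy.stats import nbinom

nu = 0.7
stats = {}
for n in (10, 100, 1000):
    p = 1.0 / n
    mean, var = nbinom.stats(nu, p, moments="mv")
    # moments of the normalized index N_n / n; both approach nu,
    # the mean and variance of the gamma law with shape nu and unit scale
    stats[n] = (float(mean) / n, float(var) / n ** 2)
    print(n, round(stats[n][0], 4), round(stats[n][1], 4))
```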
Recall that in Section 8, for α′ ∈ (0, 1], the positive random variable Y α′,1 with the univariate one-sided U-scale-mixed stable distribution was introduced as Y α′,1 d= U 1/α′ • S(α′, 1). Proof. This statement directly follows from Theorems 5 and 7 with account of Remark 3. Proof. This statement directly follows from Theorems 3 (see representation (50)) and 9 with account of Remark 3.
From the case of heavy tails we now turn to the 'light-tails' case, where in (54) α = 2. In other words, assume that the properties of the summands X j provide the asymptotic normality of the sums S n. More precisely, instead of (54), assume that b n −1/2 S n =⇒ X (n → ∞). (66)
The following results show that even under condition (66), heavy-tailed U-scale-mixed multivariate stable distributions can be limiting for random sums. Proof. This theorem is a direct consequence of Theorem 8 and Corollary 4, according to which Y α,Σ d= (2Y α/2,1)^(1/2) • X, with account of Remark 3.
Proof. This statement follows from Theorem 11 with account of (13) and Remark 3. Proof. To prove this statement it suffices to notice that the multivariate generalized Linnik distribution is a multivariate normal scale mixture with the generalized Mittag-Leffler mixing distribution (see definition (48)) and to refer to Theorem 11 with account of Remark 3. Another way to prove Corollary 9 is to deduce it from Corollary 7.
The product representations for the limit distributions in these theorems, proved in the preceding sections, make it possible to use other forms of the conditions for the convergence of random sums of random vectors to particular scale mixtures of multivariate stable laws.

Conclusions
In this paper, multivariate probability distributions were considered that are representable as scale mixtures of multivariate stable distributions. Multivariate analogs of the Mittag-Leffler distribution were introduced. Some properties of these distributions were discussed. Attention was paid to the representations of the corresponding random vectors as products of independent random variables and vectors. In these products, relations of the distributions of the involved terms with popular probability distributions were traced. As examples of distributions of the class of scale mixtures of multivariate stable distributions, multivariate generalized Linnik distributions and multivariate generalized Mittag-Leffler distributions were considered in detail. Their relations with multivariate 'ordinary' Linnik distributions, multivariate normal, stable and Laplace laws, as well as with univariate Mittag-Leffler and generalized Mittag-Leffler distributions, were discussed. Limit theorems were proved presenting necessary and sufficient conditions for the convergence of the distributions of random sequences with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to scale mixtures of multivariate elliptically contoured stable distributions. The property of scale-mixed multivariate elliptically contoured stable distributions to be both scale mixtures of a non-trivial multivariate stable distribution and normal scale mixtures was used to obtain necessary and sufficient conditions for the convergence of the distributions of random sums of random vectors with both infinite and finite covariance matrices to the multivariate generalized Linnik distribution.
The key points of the paper are:
• analogs of the multiplication theorem for stable laws were proved for scale-mixed multivariate stable distributions, relating these laws with different parameters;
• some alternative but equivalent definitions were proposed for the generalized multivariate Linnik distributions based on their property of being scale-mixed multivariate stable distributions;
• the multivariate analog of the (generalized) Mittag-Leffler distribution was introduced, and it was noticed that the multivariate (generalized) Mittag-Leffler distribution can be regarded as a special case of the multivariate (generalized) Linnik distribution;
• new mixture representations were presented for the multivariate generalized Mittag-Leffler and Linnik distributions;
• a general transfer theorem was proved establishing necessary and sufficient conditions for the convergence of the distributions of sequences of multivariate random vectors with independent random indices (including sums of a random number of random vectors and multivariate statistics constructed from samples with random sizes) to multivariate elliptically contoured scale-mixed stable distributions;
• the property of scale-mixed multivariate elliptically contoured stable distributions to be both scale mixtures of a non-trivial multivariate stable distribution and normal scale mixtures was used to obtain necessary and sufficient conditions for the convergence of the distributions of random sums of random vectors with both infinite and finite covariance matrices to the multivariate elliptically contoured generalized Linnik distribution.