Estimation of mean form and mean form difference under elliptical laws

Some ideas studied by Lele (1993) under a Gaussian perturbation model are generalised to the setting of matrix multivariate elliptical distributions. In particular, several inaccuracies in the published statistical perturbation model are corrected. In addition, a number of aspects of identifiability and estimability are considered. Instead of using the Euclidean distance matrix to propose consistent estimates, this paper determines exact formulae for the moments of the matrix $\mathbf{B} = \mathbf{X}^{c}\left(\mathbf{X}^{c}\right)^{T}$, where $\mathbf{X}^{c}$ is the centered landmark matrix. Consistent estimation of the mean form difference under elliptical laws is also studied. Finally, the main results of the paper and some methodologies for model selection and hypothesis testing are applied to a real landmark data set.


Introduction
Statistical theory of shape is a versatile technique for the classification and comparison of "objects" in several disciplines. It can be set in terms of matrix variate theory, and then plenty of distributional results are available for applications. However, some serious problems appear: the use of asymptotic distributions, tangent plane inference, isotropic models, Gaussian assumptions, etc.; see [16]. In particular, the implementation of Procrustes theory has received criticism from experts in morphometrics and related fields, see [30].
Each approach models the shape of the objects under geometrical transformations, and the methods give some invariants of interest for applications. For example: 1. affine transformations remove any geometrical information of rotation, translation, scaling and uniform shear; 2. similarity transformations via QR, SVD or pseudo-Wishart decompositions remove rotation, translation and scaling; 3. projective shape removes affine and projection effects.
Regarding computation in likelihood estimation, some affine densities can be reduced to polynomials of low degree; the associated inference then gives robust estimation of the location and scale population parameters.
However, likelihood estimation with similarity shape distributions is a difficult task, because it demands the computation of infinite series of Jack polynomials. A common computational practice considers isotropic models, but this assumption is unrealistic in applications. Users of Euclidean shape theory expect estimation of the correlation structure of the landmarks, in order to provide a full description of the objects, see [32].
Instead of the likelihood approach, some authors have proposed method-of-moments estimators under Gaussian models. The technique provides a computable and consistent estimation of the mean form and the variance-covariance structure, see [30], [31], [39] and the references therein. In fact, [30] showed that the addressed Procrustes analysis yields inconsistent estimators of mean form as well as shape. He also proved that the variance-covariance parameters are nonidentifiable under this analysis. [41] also reported the inability of Procrustes methods to estimate the correct variance-covariance structure. It is important to note that Procrustes analysis is a widely used method for shape estimation in several fields, see [16].
Thus the method-of-moments estimators can be considered a promising technique in shape theory. Such a study can include: a revision of the Gaussian case given in [30]; a generalization in the context of elliptical laws; model selection criteria; and shape hypothesis testing.
With the modified Gaussian case of [30], we can establish a connection with the theoretical studies of [33], [36] and [11], and then provide a unified approach in the general framework of matrix variate elliptical shape theory.
The above discussion is organised in this work as follows: Section 2 clarifies some results of the published Gaussian case and proposes a generalization in the context of matrix variate elliptical distributions. The section includes: the identifiability and estimability of the parameters; the perturbation model under a matrix variate elliptical distribution; and the invariance and the nuisance parameters. Section 3 studies the consistent estimation of the population parameters under dependence and independence; it also gives exact formulae for the moment estimators. Section 4 provides a consistent estimation for a general non-negative definite correlation matrix. Section 5 gives extensions of form difference under the Euclidean distance matrix and elliptical models. Finally, Section 6 gives a complete example with the main results of the paper; it also proposes a model selection criterion.

Preliminary results
In this section we review some Gaussian distributional results of [30]. Then the statistical model of the paper is proposed under a general matrix variate elliptical distribution.

Matrix variate elliptical distribution
A detailed discussion of matrix variate elliptical distributions can be found in [19] and [23].
Remark 2.1. There are two definitions of a matrix variate Gaussian distribution. The first one is written as $\mathbf{Y} \sim \mathcal{N}_{K \times D}(\boldsymbol{\mu}, \boldsymbol{\Sigma}, \boldsymbol{\Theta})$, see [1], [18] and [19]. In general, the separate estimation of $\boldsymbol{\Sigma}$ or $\boldsymbol{\Theta}$ is not possible, but the Kronecker product $\boldsymbol{\Sigma} \otimes \boldsymbol{\Theta}$ (or $\boldsymbol{\Theta} \otimes \boldsymbol{\Sigma}$) is identifiable, see [30], [18]. Then, given that $\mathrm{Cov}(\operatorname{vec}\mathbf{Y}) = \boldsymbol{\Theta} \otimes \boldsymbol{\Sigma}$ and $\mathrm{Cov}(\operatorname{vec}\mathbf{Y}^{T}) = \boldsymbol{\Sigma} \otimes \boldsymbol{\Theta}$, a number of authors use the alternative notation $\mathbf{Y} \sim \mathcal{N}_{K \times D}(\boldsymbol{\mu}, \boldsymbol{\Sigma} \otimes \boldsymbol{\Theta})$, where "vec" denotes the vectorization operator, see [36] and [23]. The same situation appears in the matrix variate elliptical case; in this paper we follow the second notation.

Definition 2.1. The $K \times D$ random matrix $\mathbf{Y}$ is said to have a matrix variate elliptical distribution, with location parameter $\boldsymbol{\mu} \in \mathbb{R}^{K \times D}$ and scale parameter $\boldsymbol{\Sigma} \otimes \boldsymbol{\Theta} \in \mathbb{R}^{KD \times KD}$, if its density function with respect to the Lebesgue measure $(d\mathbf{Y})$ is given by
$$ f_{\mathbf{Y}}(\mathbf{Y}) = |\boldsymbol{\Sigma}|^{-D/2}\,|\boldsymbol{\Theta}|^{-K/2}\, h\!\left(\operatorname{tr}\left[\boldsymbol{\Sigma}^{-1}(\mathbf{Y}-\boldsymbol{\mu})\,\boldsymbol{\Theta}^{-1}(\mathbf{Y}-\boldsymbol{\mu})^{T}\right]\right). \qquad (2.1) $$
Here $\boldsymbol{\Sigma} \in \mathbb{R}^{K \times K}$ and $\boldsymbol{\Theta} \in \mathbb{R}^{D \times D}$ are positive definite matrices, denoted by $\boldsymbol{\Sigma} > 0$ and $\boldsymbol{\Theta} > 0$. The function $h : \mathbb{R} \to [0, \infty)$ is termed the density generator and it is required that
$$ \int_{0}^{\infty} u^{KD/2 - 1}\, h(u)\, du < \infty. $$
We collect the above definition in the notation $\mathbf{Y} \sim \mathcal{E}_{K \times D}(\boldsymbol{\mu}, \boldsymbol{\Sigma} \otimes \boldsymbol{\Theta}, h)$. Finally, the characteristic function $\psi_{\mathbf{Y}}(\mathbf{T})$ of $\mathbf{Y}$ is given by
$$ \psi_{\mathbf{Y}}(\mathbf{T}) = \operatorname{etr}\!\left(i\,\mathbf{T}^{T}\boldsymbol{\mu}\right)\,\varphi\!\left(\operatorname{tr}\left[\mathbf{T}^{T}\boldsymbol{\Sigma}\mathbf{T}\boldsymbol{\Theta}\right]\right), \qquad (2.2) $$
for some function $\varphi$ determined by $h$;
then it is easy to check that $E(\mathbf{Y}) = \boldsymbol{\mu}$ and $\mathrm{Cov}(\operatorname{vec}\mathbf{Y}^{T}) = c_{0}\,\boldsymbol{\Sigma} \otimes \boldsymbol{\Theta}$, where $c_{0}$ is a constant determined by the generator $h$, see [19]. We must point out that these last two assertions are incorrectly stated in [30] under the context of a perturbation model. Finally, note that the matrix variate elliptical distributions generalize the Gaussian case. They include the contaminated Gaussian, Pearson type II and VII, Kotz, Jensen-logistic, power exponential and Bessel distributions, among others; these distributions have lighter or heavier tails and/or present a smaller or greater degree of kurtosis than the Gaussian distribution.
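As a numerical illustration (not part of the original development), members of this family that arise as scale mixtures of the matrix variate Gaussian are easy to simulate through the representation $\mathbf{Y} = \boldsymbol{\mu} + s\,\boldsymbol{\Sigma}^{1/2}\mathbf{Z}\boldsymbol{\Theta}^{1/2}$, which yields $\mathrm{Cov}(\operatorname{vec}\mathbf{Y}^{T}) \propto \boldsymbol{\Sigma} \otimes \boldsymbol{\Theta}$. The sketch below uses a Student-$t$ radial scale as an illustrative choice; all function names are ours.

```python
import numpy as np

def sample_matrix_elliptical(mu, Sigma, Theta, n, scale_sampler=None, rng=None):
    """Draw n matrices from a scale mixture of matrix variate Gaussians,
    Y = mu + s * Sigma^{1/2} Z Theta^{1/2}, where Z has i.i.d. N(0,1) entries
    and s > 0 is a random radial scale.  s = 1 gives the Gaussian case;
    other radial laws give heavier- or lighter-tailed elliptical members."""
    rng = np.random.default_rng(rng)
    K, D = mu.shape
    A = np.linalg.cholesky(Sigma)                  # K x K factor, A A^T = Sigma
    B = np.linalg.cholesky(Theta)                  # D x D factor, B B^T = Theta
    out = np.empty((n, K, D))
    for i in range(n):
        s = 1.0 if scale_sampler is None else scale_sampler(rng)
        out[i] = mu + s * (A @ rng.standard_normal((K, D)) @ B.T)
    return out

# Example: a matrix variate Student-t law (nu degrees of freedom), so that
# Cov(vec Y^T) is proportional to Sigma (x) Theta, as in the definition above.
nu = 8
student_scale = lambda rng: np.sqrt(nu / rng.chisquare(nu))
Y = sample_matrix_elliptical(np.zeros((6, 2)), np.eye(6), np.eye(2),
                             n=1000, scale_sampler=student_scale, rng=0)
```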

Identifiability and estimability of the parameters
In this section we study the identifiability and estimability of the parameters μ and Σ ⊗ Θ.
First observe that the density (2.1) can be written in vector form as $\operatorname{vec}\mathbf{Y} \sim \mathcal{E}_{KD}(\operatorname{vec}\boldsymbol{\mu}, \boldsymbol{\Xi}, h)$, with $\boldsymbol{\Xi} = \boldsymbol{\Theta} \otimes \boldsymbol{\Sigma}$. Now, assume that our data consist of a sample of matrices $\mathbf{Y}_1, \mathbf{Y}_2, \ldots, \mathbf{Y}_n$ from a given population, collected in the random matrix $\mathbf{Y} = (\mathbf{Y}_1, \ldots, \mathbf{Y}_n)$, and suppose that $\mathbf{Y}_1, \mathbf{Y}_2, \ldots, \mathbf{Y}_n$ are independent. Then the density function of $\mathbf{Y}$ follows from [11]. In this case, $\lambda_{\max}$ is the critical point where $h^{*}(\lambda)$ attains its maximum, and $\mathbf{H}_n = \mathbf{I}_n - \frac{1}{n}\mathbf{1}_n\mathbf{1}_n^{T}$ defines an orthogonal projection, i.e., $\mathbf{H}_n = \mathbf{H}_n^{T} = \mathbf{H}_n^{2}$. Alternatively, writing $\bar{\mathbf{Y}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{Y}_i$, the estimator of $\boldsymbol{\mu}$ is $\hat{\boldsymbol{\mu}} = \bar{\mathbf{Y}}$, see [13] and [15]. We also note that the maximum likelihood estimators under singular matrix variate elliptical models follow the same rules as in the singular Gaussian case, see [26] and [38, Section 8a.5].

Perturbation model under a matrix variate elliptical distribution
Assume that the random matrix $\mathbf{X} \in \mathbb{R}^{K \times D}$ represents a geometrical figure comprising $K$ landmarks in $D$ dimensions, with $K > D$. $\mathbf{X}$ is called the landmark coordinate matrix, see [30].
Consider an independent sample of landmark coordinate matrices X i ∈ K×D , i = 1, 2, . . . , n, from a given population.
The statistical model of this work is a generalization of the perturbation law used by [30]. If $\boldsymbol{\mu} \in \mathbb{R}^{K \times D}$ is the corresponding mean form, then we propose the model
$$ \mathbf{X}_i = (\boldsymbol{\mu} + \mathbf{E}_i)\boldsymbol{\Gamma}_i + \mathbf{1}_K \mathbf{t}_i^{T}, \qquad \mathbf{E}_i \sim \mathcal{E}_{K \times D}(\mathbf{0}, \boldsymbol{\Sigma}_K \otimes \boldsymbol{\Sigma}_D, h), \qquad i = 1, 2, \ldots, n, \qquad (2.5) $$
where $\boldsymbol{\Gamma}_i$ is a $D \times D$ rotation matrix and $\mathbf{t}_i \in \mathbb{R}^{D}$ is a translation vector. The parameters of interest are $(\boldsymbol{\mu}, \boldsymbol{\Sigma}_K \otimes \boldsymbol{\Sigma}_D)$ and the nuisance parameters are $(\boldsymbol{\Gamma}_i^{T}, \mathbf{t}_i)$, $i = 1, 2, \ldots, n$. A detailed explanation of the corresponding Gaussian perturbation model can be found in [30]. Recalling that $\operatorname{vec}(\mathbf{y}\mathbf{x}^{T}) = \mathbf{x} \otimes \mathbf{y}$ for vectors $\mathbf{x}$ and $\mathbf{y}$, the model (2.5) can also be written in the vectorized form (2.6), where $\mathbf{E}_{ii}^{n} = \mathbf{e}_i^{n}(\mathbf{e}_i^{n})^{T}$ and $\mathbf{e}_i^{n}$ is the $i$-th column unit vector of order $n$.
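A minimal simulation sketch of the perturbation model as reconstructed above (with Gaussian $\mathbf{E}_i$ for simplicity, and random nuisance rotations and translations); the function names and parameter choices are illustrative assumptions.

```python
import numpy as np

def random_rotation(D, rng):
    """A random D x D rotation matrix (nuisance Gamma_i)."""
    Q, R = np.linalg.qr(rng.standard_normal((D, D)))
    Q = Q * np.sign(np.diag(R))        # make the QR factorization unique
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]             # force det(Q) = +1
    return Q

def simulate_perturbation_model(mu, Sigma_K, Sigma_D, n, rng=None):
    """Simulate X_i = (mu + E_i) Gamma_i + 1_K t_i^T with Gaussian E_i
    (the simplest member of the elliptical family)."""
    rng = np.random.default_rng(rng)
    K, D = mu.shape
    A = np.linalg.cholesky(Sigma_K)
    B = np.linalg.cholesky(Sigma_D)
    ones_K = np.ones((K, 1))
    X = np.empty((n, K, D))
    for i in range(n):
        E = A @ rng.standard_normal((K, D)) @ B.T      # perturbation of the mean form
        Gamma = random_rotation(D, rng)                # nuisance rotation
        t = rng.uniform(-10, 10, size=(1, D))          # nuisance translation
        X[i] = (mu + E) @ Gamma + ones_K @ t
    return X
```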

Invariance and nuisance parameters
A first analysis of the elliptical perturbation law involves the nuisance parameters. As in the Gaussian case of [30], we can remove them by a simple transformation. From (2.6), setting $\mathbf{X}_i^{c} = \mathbf{H}_K\mathbf{X}_i$, with $\mathbf{H}_K = \mathbf{I}_K - \frac{1}{K}\mathbf{1}_K\mathbf{1}_K^{T}$, the translation $\mathbf{t}_i$ is removed; here $\boldsymbol{\mu}^{*} = \mathbf{H}_K\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}_K^{*} = \mathbf{H}_K\boldsymbol{\Sigma}_K\mathbf{H}_K$, $i = 1, \ldots, n$. The columns of $\boldsymbol{\mu}^{*}$ sum to zero, which means that it is a centered matrix.
Recall that $K > D$ and $\operatorname{rank}(\boldsymbol{\Sigma}_K^{*}) = K - 1$; then, by [13] and [15], we obtain the distribution of $\mathbf{B}_i = \mathbf{X}_i^{c}(\mathbf{X}_i^{c})^{T}$. By definition, $\mathbf{B}_i$ is said to have a generalized singular pseudo-Wishart distribution, which does not depend on the nuisance parameters.
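The invariance that makes $\mathbf{B}_i$ free of the nuisance parameters is easy to check numerically; a minimal sketch (the test configuration below is illustrative):

```python
import numpy as np

def centered_gram(X):
    """B = X^c (X^c)^T with X^c = H_K X and H_K = I_K - (1/K) 1_K 1_K^T."""
    K = X.shape[0]
    H = np.eye(K) - np.ones((K, K)) / K
    Xc = H @ X
    return Xc @ Xc.T

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 2))                      # a configuration of 6 landmarks
Q, _ = np.linalg.qr(rng.standard_normal((2, 2)))     # an orthogonal rotation/reflection
t = rng.normal(size=(1, 2))                          # a translation
X_moved = X @ Q + np.ones((6, 1)) @ t
# B is unaffected by the nuisance rotation and translation:
assert np.allclose(centered_gram(X), centered_gram(X_moved))
```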

Remark 2.3. Observe that B i can be written as
In particular, if $\boldsymbol{\Sigma}_D = \mathbf{I}_D$, then we have, furthermore, that the result of [30] is obtained as a particular case of (2.9); just note that $\boldsymbol{\mu}^{*}(\boldsymbol{\mu}^{*})^{T}$ is the noncentrality parameter of [30] and that we have used [36, Definition 10.3.1]. In addition, writing $\mathbf{X}^{c}$ simply as $\mathbf{X}$, we get

Remark 2.4. The corresponding result in
Hence, in the same way as in [30], assuming $\boldsymbol{\Sigma}_D = \mathbf{I}_D$ and using the expression above, we have

Consistent estimation of μ and Σ K
The Gaussian case of [30] studied consistent estimation via the Euclidean distance matrix; in our elliptical setting we can go further and use exact expressions for the first two moments of the matrix $\mathbf{B}$.
Behind the common assumption of independent landmarks along the $D$ axes, we are formally considering $\boldsymbol{\Sigma}_D = \mathbf{I}_D$ in the context of a matrix variate Gaussian model. However, in the elliptical case the perspective is wider and we have two possible scenarios: 1. independence and uncorrelated landmarks; 2. probabilistic dependence and uncorrelated landmarks.
In both cases $\boldsymbol{\Sigma}_D = \mathbf{I}_D$, but the moments of the matrix $\mathbf{B}$ are different. In summary, we will find the first two moments of $\mathbf{B} = \mathbf{Y}\mathbf{Y}^{T}$, where $\boldsymbol{\Sigma}_D = \mathbf{I}_D$. This includes two cases: a) the $\mathbf{y}_d$'s are independent and uncorrelated; or b) the $\mathbf{y}_d$'s are dependent but uncorrelated.
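Before deriving the exact formulae, a small Monte Carlo sketch illustrates that the two scenarios really do produce different second moments of $\mathbf{B}$ even though $\boldsymbol{\Sigma}_D = \mathbf{I}_D$ in both. The common-random-scale construction used below to obtain dependent but uncorrelated columns is an illustrative assumption, not the paper's model.

```python
import numpy as np

def mc_moments_of_B(draw_Y, n_mc=20000, rng=None):
    """Monte Carlo estimates of E(B) and Cov(vec B), with B = Y Y^T."""
    rng = np.random.default_rng(rng)
    vecs = []
    for _ in range(n_mc):
        Y = draw_Y(rng)
        vecs.append((Y @ Y.T).ravel())
    vecs = np.asarray(vecs)
    K = int(round(np.sqrt(vecs.shape[1])))
    return vecs.mean(axis=0).reshape(K, K), np.cov(vecs, rowvar=False)

K, D, nu = 4, 2, 10

# (a) independent, uncorrelated columns (Gaussian landmarks, Sigma_D = I_D)
indep = lambda rng: rng.standard_normal((K, D))

# (b) dependent but uncorrelated columns: a common random scale couples the
#     columns while keeping their cross-covariances equal to zero
dep = lambda rng: np.sqrt(nu / rng.chisquare(nu)) * rng.standard_normal((K, D))

EB_indep, covB_indep = mc_moments_of_B(indep, rng=0)
EB_dep, covB_dep = mc_moments_of_B(dep, rng=0)
# E(B) differs only by a scalar factor between the two scenarios, while
# Cov(vec B) differs in structure, which is why the cases are treated separately.
```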

Moments of B under dependence
First assume that $\mathbf{Y} = (\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_D)$, where the columns $\mathbf{y}_d$ are uncorrelated but not necessarily independent. Now, for $\mathbf{x}, \mathbf{y} \in \mathbb{R}^{n}$ we know that $\operatorname{vec}(\mathbf{x}\mathbf{y}^{T}) = \mathbf{y} \otimes \mathbf{x}$ and $\mathbf{x}\mathbf{y}^{T} = \mathbf{x} \otimes \mathbf{y}^{T} = \mathbf{y}^{T} \otimes \mathbf{x}$, see [33]. Therefore $\operatorname{vec}(\mathbf{y}\mathbf{y}^{T})\operatorname{vec}^{T}(\mathbf{y}\mathbf{y}^{T}) = \mathbf{y} \otimes \mathbf{y}^{T} \otimes \mathbf{y} \otimes \mathbf{y}^{T}$; this property will be useful in the following result.
where K KD is the commutation matrix defined in [33].
Some particular values of c 0 and κ 0 are given in Table 1.
Proof. Differentiating (2.2) and using [12], we get the required moment expressions, where $\mathbf{Y} \in \mathbb{R}^{K \times D}$. Then we need to find $E(\mathbf{y}_d\mathbf{y}_d^{T})$ and $E(\mathbf{y}_d\mathbf{y}_s^{T} \otimes \mathbf{y}_d\mathbf{y}_s^{T})$. These moments are derived in the following result.
and $\boldsymbol{\Theta} = (\theta_{ds})$. Finally, the required result is obtained by using $K_{mn}(\mathbf{A} \otimes \mathbf{B}) = (\mathbf{B} \otimes \mathbf{A})K_{ts}$ and $K_{mm} \equiv K_{m}$, where $\mathbf{A} \in \mathbb{R}^{n \times s}$ and $\mathbf{B} \in \mathbb{R}^{m \times t}$, see [33].
Consider the following notation for a certain double summation,
with $mr = p$ and $ns = q$; then we define the corresponding double sum (Definition 3.1). Let $\mathbf{A} = (\mathbf{A}_{ij})$ and $\mathbf{B} = (\mathbf{B}_{ij})$ be partitioned matrices and let $\odot$ denote the Khatri-Rao product, see [38, p. 30]; in particular, the product can be written entrywise for $\mathbf{C} = (c_{ij})$.

Proof. This is a consequence of (3.1), (3.2), Definition 3.1 and Theorem 2.
With some errors, [23, Theorem 3.2.13 and Example 3.2.1] derived general and particular examples. For instance, in the central case $\boldsymbol{\mu} = \mathbf{0}$ with $D = n - 1$, they found the factor $D(\kappa_0 - c_0^2)$ instead of the correct term $D(\kappa_0 - Dc_0^2)$. The published univariate version for the one-dimensional Student's t-distribution is also incorrect.

Moments of B under independence
Define $\mathbf{Y}$ and $\boldsymbol{\mu}$ columnwise as before, where $\mathbf{y}_1, \mathbf{y}_2, \ldots, \mathbf{y}_D$ are now independent, and consider $\mathrm{Cov}(\mathbf{y}_d \otimes \mathbf{y}_d)$.
Then we need to find $E(\mathbf{y}_d\mathbf{y}_d^{T})$ and $\mathrm{Cov}(\mathbf{y}_d \otimes \mathbf{y}_d)$. These results are summarized next.
Proof. The results follow by taking d = s in Theorem 3.2.

$$\mathrm{Cov}(\operatorname{vec}\mathbf{B}) = \mathrm{Cov}\!\left(\operatorname{vec}\mathbf{Y}\mathbf{Y}^{T}\right) = \mathrm{Cov}\!\left(\sum_{d=1}^{D}\mathbf{y}_d \otimes \mathbf{y}_d\right) = \sum_{d=1}^{D}\mathrm{Cov}(\mathbf{y}_d \otimes \mathbf{y}_d),$$
where the last equality uses the independence of the columns. Then the stated expressions follow.

Proof. This is a consequence of Theorem 3.4.

Method-of-moments estimators
We return to the original notation Θ = Σ D , Σ = Σ * K and μ = μ * . In this section we will derive the method-of-moments estimators of Σ * K and μ * .
The first two sample moments of $\mathbf{B}$ are given by $\bar{\mathbf{B}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{B}_i$ and the $K^2 \times K^2$ sample covariance matrix $\mathbf{S}$ of $\operatorname{vec}\mathbf{B}_i$. Note that the expectation of $\bar{\mathbf{B}}$ equals $E(\mathbf{B})$ in both the independent and the dependent cases.
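On the data side these sample moments are straightforward to compute; a minimal sketch (the solver that equates them with the exact expressions of the theorems above is not reproduced here):

```python
import numpy as np

def sample_moments_of_B(X_sample):
    """Empirical first two moments of B_i = X_i^c (X_i^c)^T over a sample of
    landmark configurations: the mean matrix B_bar and the K^2 x K^2 sample
    covariance matrix S of vec(B_i), whose diagonal carries the s_ij."""
    K = X_sample[0].shape[0]
    H = np.eye(K) - np.ones((K, K)) / K
    B_vecs = np.array([((H @ X) @ (H @ X).T).ravel() for X in X_sample])
    B_bar = B_vecs.mean(axis=0).reshape(K, K)
    S = np.cov(B_vecs, rowvar=False)
    return B_bar, S
```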

Dependent case
Start with the following expression, where $\sigma_{ii}$ was previously found in (3.6).
Here $\sigma_{ii}$ and $\sigma_{jj}$ were previously computed in (3.6).
Note that $s_{ij}$ comes from the diagonal of $\mathbf{S} \in \mathbb{R}^{K^2 \times K^2}$, because $s_{ij} = \mathrm{Cov}(b_{ij})$.

Remark 3.2.
Special attention must be paid to $P$, $Q_{ii}$, $R$, $T_{ij}$, and to the sign of the square root. They depend on the selected model and on the sample statistics $s_{ij}$ and $b_{ij}$.

Independent case
For this case, summarizing, the method-of-moments estimators of $\boldsymbol{\Sigma}_K^{*}$ and $\mathbf{M} = \boldsymbol{\mu}^{*}(\boldsymbol{\mu}^{*})^{T}$ are given by the following exact expressions, for $i = 1, 2, \ldots, K$:
Here $\sigma_{ii}$ and $\sigma_{jj}$ were previously computed in (3.11).

Denote the solution as $(\widehat{\mathbf{M}}, \widehat{\boldsymbol{\Sigma}}_K^{*})$. As before, given that $s_{ij} = \mathrm{Cov}(b_{ij})$, the $s_{ij}$'s lie on the diagonal of the matrix $\mathbf{S}$.

Remark 3.3. Recall that the method-of-moments estimators are not uniquely defined. For example, assume that the method-of-moments estimator of g(θ)
is required, instead of the estimation of $\theta$ itself. We have a number of different ways to proceed. We can obtain the method-of-moments estimator $\hat{\theta}$ of $\theta$ and then use $g(\hat{\theta})$ as an estimator of $g(\theta)$. Alternatively, we can find the moments of the function $g(\theta)$ and then obtain the method-of-moments estimator $\widehat{g(\theta)}$ of $g(\theta)$ directly.

Principal coordinate analysis, summarized by [30], provides the following algorithm for $\boldsymbol{\mu}^{*}$. The procedure finds the estimated coordinates of the mean form by using the method-of-moments estimator $\widehat{\mathbf{M}}$. Note that the mean form is invariant under translation, rotation, and reflection transformations.
Algorithm. Initialization: compute the spectral decomposition of $\widehat{\mathbf{M}}$ and retain its $D$ largest eigenvalues with the associated eigenvectors. The solutions are:
Proof. This is a consequence of Remark 3.3.
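A sketch of the principal coordinate step: the centered mean form is recovered from the spectral decomposition of $\widehat{\mathbf{M}}$, and is unique only up to rotation and reflection, which is harmless for a mean form.

```python
import numpy as np

def mean_form_from_M(M_hat, D):
    """Principal coordinate step: recover K x D centered coordinates mu* from
    an estimate of M = mu* (mu*)^T, unique up to rotation and reflection."""
    M_sym = (M_hat + M_hat.T) / 2                   # symmetrize against rounding error
    eigval, eigvec = np.linalg.eigh(M_sym)
    idx = np.argsort(eigval)[::-1][:D]              # the D largest eigenvalues
    lam = np.clip(eigval[idx], 0.0, None)           # clip tiny negative eigenvalues
    return eigvec[:, idx] * np.sqrt(lam)

# Sanity check: an exact M = mu* (mu*)^T is inverted up to rotation/reflection.
rng = np.random.default_rng(2)
mu_star = rng.standard_normal((6, 2))
mu_star -= mu_star.mean(axis=0)                     # a centered "mean form"
rec = mean_form_from_M(mu_star @ mu_star.T, D=2)
assert np.allclose(rec @ rec.T, mu_star @ mu_star.T, atol=1e-8)
```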

Estimation of the form difference
A detailed discussion of the Euclidean distance matrix, the form matrix, the form difference and their probabilistic and geometrical properties can be found in [29, 30]. Consider the following square symmetric matrix, also known as the Euclidean distance matrix: $\mathbf{F}(\mathbf{X}) = \big(d(i, j)\big)$, $i, j = 1, \ldots, K$, where $d(i, j)$ denotes the Euclidean distance between landmarks $i$ and $j$. In shape theory this matrix is termed the form matrix. Some interesting properties of the form matrix are given in [29]. For example, $\mathbf{F}(\mathbf{X})$ is a maximal invariant under the group of transformations consisting of translation, rotation, and reflection; therefore, $\mathbf{F}(\mathbf{X})$ retains all the geometrically relevant information about the form of the object. Let $\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_n$ be $n$ independent observations from a population I and let $\mathbf{Y}_1, \mathbf{Y}_2, \ldots, \mathbf{Y}_m$ be $m$ independent observations from a population II. Let $\boldsymbol{\mu}_X$ be the mean form of population I with corresponding form matrix $\mathbf{F}(\boldsymbol{\mu}_X)$; in a similar way denote $\boldsymbol{\mu}_Y$ and $\mathbf{F}(\boldsymbol{\mu}_Y)$ for population II. [30] provides the following concept:

Definition 5.1. The form difference between populations I and II is defined as $\mathrm{FDM}(\boldsymbol{\mu}_X, \boldsymbol{\mu}_Y) = \mathbf{F}(\boldsymbol{\mu}_X) * \mathbf{F}(\boldsymbol{\mu}_Y)^{-H}$,
where $*$ denotes the Hadamard product and $0/0 := 0$. Here $\mathbf{A}^{-H}$ denotes the inverse of $\mathbf{A}$ with respect to the Hadamard product; [6] gives a formula for this inverse in terms of the usual product.
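Both the form matrix and the form difference matrix are elementary to compute. The sketch below uses the elementwise quotient with the convention $0/0 = 0$; the ordering of numerator and denominator follows the reconstruction above and should be checked against [30].

```python
import numpy as np

def form_matrix(X):
    """F(X): the K x K matrix of Euclidean distances d(i, j) between landmarks."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def form_difference_matrix(F_X, F_Y):
    """Entrywise quotient of two form matrices, with the convention 0/0 = 0."""
    return np.divide(F_X, F_Y, out=np.zeros_like(F_X, dtype=float), where=F_Y > 0)
```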
The next result applies Remark 3.3 to the consistent estimation of the form difference between two populations. It assumes non-isotropic perturbed landmarks within the axes but an isotropic relation between them.
Theorem 5.2. Let $\boldsymbol{\mu}_X$, $\boldsymbol{\Sigma}_{KX}^{*} \otimes \boldsymbol{\Sigma}_{DX}$ and $\boldsymbol{\mu}_Y$, $\boldsymbol{\Sigma}_{KY}^{*} \otimes \boldsymbol{\Sigma}_{DY}$ be the corresponding parameters of populations I and II. Then

Example
The mouse vertebra problem was originally studied in the Gaussian case by [16] (see also [34]). A further analysis under elliptical models was implemented by [8]. The experiment considers the second thoracic vertebra T2 of two groups of mice, large and small. The mice are selected and classified according to large or small body weight; in this case, the sample consists of 23, 23 and 30 large, small and control bones, respectively. The vertebrae are digitized and summarized in six mathematical landmarks placed at points of high curvature; they are symmetrically selected by measuring the extreme positive and negative curvature of the bone. Figure 1 shows the sample for the three groups with the corresponding (x, y) Cartesian coordinates of the six landmarks on the bones.

Note that the clusters of the landmarks have different shapes, so the usual normal isotropic model considered in the Gaussian literature is not appropriate, see [16] and the references therein. The shape difference analysis among the three groups has been largely settled by different approaches; however, the correlation structure among landmarks requires more research. Performing inference with such non-isotropic shape distributions is very difficult, and this has forced the use of strong assumptions about correlation. More than an example, this landmark data set is highly valuable for a correlation structure analysis. The symmetry of the vertebra certainly suggests, a priori, a non-isotropic model. The control group is also useful for comparisons and checks of correctness.

Theorems 3.6 and 3.8 can be easily implemented for a number of models. We focus on the main novelty (Theorem 3.6) and a Kotz type model (including the Gaussian). This model is very flexible and meaningful for various values of the parameters r, s and N, see the appendix.
First we illustrate Theorem 3.7 under six different models with independent landmarks. The method-of-moments estimate of the mean shape under the classical Gaussian model is shown in Figure 2. For standard plots, we show the Bookstein coordinates of the Cartesian coordinates computed from the theorems of the paper, see [16]; then we can compare the estimates directly. Bookstein coordinates are easily obtained by fixing a reference line with landmarks 1 and 2: both landmarks are sent to (-0.5, 0) and (0.5, 0), respectively, and then we plot the coordinates of the remaining four landmarks of the mean shape (a computational sketch is given below). This display is also useful because the mean shape is collected in four landmarks instead of six.

In the Gaussian case the estimate is unrealistic: we expect a mean form similar to the ideal vertebra. Note that each point of the polygon represents a landmark of the bone. In the Gaussian case, the assumption of landmark independence explains the broken symmetry in the estimate. However, if we consider isotropic models more robust than the Gaussian, the estimation tends to recover the symmetry of the vertebra. Indeed, this is a surprising and interesting aspect to research, because the independence assumption on the landmarks is offset by the complexity of the model and the mean form gets closer to the ideal vertebra. The addressed evolution from Kotz 1 to Kotz 5 is depicted in Figures 3 to 7. The Bookstein coordinates of the estimated mean form are colored in order to perform comparisons. Here S, L and C denote the small, large and control groups, respectively.

The addressed heuristic evolution also suggests the role of the analysis in shape data. First note that the mouse vertebra data are based on non-anatomical landmarks, so a number of models have an equal chance of estimating the mean form and the correlation structure. In this case, we have noticed that the classical isotropic Gaussian model of the literature is not appropriate; then we can propose robust laws and a selection criterion. However, for anatomical landmark data modeled by an expert in morphometrics, we must follow the proposed law and robust models cannot be implemented.
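The Bookstein registration used for the plots can be sketched as follows (two-dimensional configurations, baseline given by landmarks 1 and 2; the helper name is ours):

```python
import numpy as np

def bookstein_coordinates(X, i=0, j=1):
    """Send landmark i to (-0.5, 0) and landmark j to (0.5, 0) by translation,
    rotation and scaling; the remaining rows are the Bookstein coordinates.
    X is a K x 2 configuration (two-dimensional landmarks only)."""
    X = np.asarray(X, dtype=float)
    base = X[j] - X[i]
    length = np.linalg.norm(base)
    c, s = base / length
    R = np.array([[c, s], [-s, c]])                 # rotates the baseline onto the x-axis
    return (X - X[i]) @ R.T / length - np.array([0.5, 0.0])
```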
We now focus on the dependent case and the method-of-moments estimators of Theorem 3.6. Given that the Gaussian case has been ruled out, we study other Kotz models. In order to illustrate the important effect of landmark dependence, we study a simple non-Gaussian Kotz model. Consider the Kotz 1 function with N = 2, r = 1/2 and s = 1; we will compare its performance with the standard estimates.

Table 2 provides comparisons among mean shape estimates of the small group. We include the mean shape by moments of Theorem 3.6, the mean shape by the Fréchet method (see [25]), and the mean shape by the Bookstein method (see [2]). We can compare the estimates directly if we use Bookstein coordinates, i.e. the reference line is given by landmarks 1 and 2; both landmarks are sent to (-0.5, 0) and (0.5, 0), respectively, and then the mean shape is comprised in the remaining four landmarks. The estimates are indeed very similar. Recall that the Kotz 1 law with independent landmarks gave a poor method-of-moments estimate of the mean shape. However, under the expected and realistic dependence, the same model matches the mean shape estimators derived by standard shape theories, see Figures 3 and 8, respectively.

Table 2: Mean shape estimation in the small group by Theorem 3.6 (Kotz 1), Fréchet (F), and Bookstein (B).

The exact estimation of Theorem 3.6 also agrees with the literature about the strong difference in Gaussian mean shape between the small (S) and large (L) groups. Figure 8 also shows the mean shape estimation of the control (C) group. As we expect, the control group tends to show strong symmetry among landmarks, "averaging" in some sense the small and large estimates.
For a complete analysis we have included the so-called Kotz 1, Kotz 2, Kotz 3, Kotz 4 and Kotz 5 models, with parameters (N = 2, s = 1, r = 1/2); (N = 3, s = 1, r = 1/2); (N = 2, s = 2, r = 1/2); (N = 2, s = 3, r = 1/2) and (N = 20, s = 20, r = 1/2), respectively. The appendix gives the technical details about the generalized singular pseudo-Wishart distributions and the particular Kotz pseudo-Wishart distributions referred to in this example. The corresponding mean shape estimates were computed for the five models but, for reasons of space, we only show the results of the Kotz 5 function. This model was suggested by its performance with Theorem 3.7 and by certain selection criteria that we will propose later, see Figure 9. The figure shows the sample of the three groups, Small: , Large: +, Control: ×. It also plots the Kotz 5 mean shape joined with colored dashed lines. For a simple plot, we have used again the addressed Bookstein coordinates with landmarks 1 and 2 as the reference baseline.

Now we apply the routine of Section 4 for a consistent estimation of the general non-negative definite matrix $\boldsymbol{\Sigma}_D$. We show the results for the Kotz 5 model. First, we set the tolerance $\varepsilon_1 = \varepsilon_2 = 0.000005$ in the three groups (small, large and control). The numbers of iterations needed to reach that tolerance in the three groups were 57, 53 and 61, respectively.
The estimated covariance matrices for the small group are given next. For a simple interpretation we provide the correlation matrix $\boldsymbol{\rho}$ instead of $\boldsymbol{\Sigma}$; recall that $\boldsymbol{\rho} = (\operatorname{diag}\boldsymbol{\Sigma})^{-1/2}\,\boldsymbol{\Sigma}\,(\operatorname{diag}\boldsymbol{\Sigma})^{-1/2}$. The estimated correlation matrices for the large group follow, and likewise for the control group. The three groups reveal null correlation between the axes. However, high correlation among landmarks is found, as we expected from the symmetry of the bones. The estimates in the experimental groups (S, L) detect the differential landmarks for both mean shapes. In the control group, the estimates tend to follow the main contribution of those differential landmarks, as we expect.
We have also run the routines for the models Kotz 1 to Kotz 4 with the same tolerance. They reached stability in between 50 and 70 iterations in the three groups. Similar conclusions about the almost null correlation among axes and the strong correlation among landmarks were found for these models. We omit the results for models 1 to 4 and focus on the Kotz 5 model, which is the "best" of them under the following selection criteria.
For a model selection criterion, the control group plays a fundamental role. In this case we look for the minimum coefficient of variation with the small and large groups. We also consider the distance between the small and the large group relative to the mean with controls. We apply a non-Euclidean distance between covariance matrices, a technique due to [17]. The method is appropriate for meaningful correlation matrices; in this case it is performed only for $\boldsymbol{\Sigma}_K^{*}$. In Tables 3 and 4, the notation K1, ..., K5, s, l, c stands for Kotz 1, ..., Kotz 5, small, large and control, respectively. Tables 3 and 4 show the pairwise covariance distances, with the coefficient of variation (in percent) in parentheses. We are searching for models which reflect the role of the control group and separate the classes properly. The analysis must be complemented with the mean shape estimates. Moreover, a third criterion considers the distance to another accepted estimate; in this case we use the Fréchet mean shape. The addressed mean shape distance can be computed by a number of approaches, see for example [24]. The Kotz 3 and Kotz 4 models behave well, with a low coefficient of variation, but they are far from the control group and the sample. If we compute the so-called Riemannian distance between the method-of-moments estimates and the Fréchet and Bookstein mean shapes, we obtain the results of Table 5: the mean shape of the Kotz 5 model is very near the estimates computed by Fréchet and Bookstein (which are also similar to each other). It also reflects a clear difference between the small and large groups. Then, collecting the results, we can propose the Kotz type 5 model as a suitable law for modeling this particular example. Note that this selection agrees with the conclusion proposed in the independent case.
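For the covariance comparisons, a common non-Euclidean choice is the affine-invariant Riemannian metric between positive definite matrices; whether this coincides exactly with the metric of [17] is an assumption, and since $\boldsymbol{\Sigma}_K^{*}$ has rank $K - 1$ the metric would be applied to a reduced or slightly regularized matrix.

```python
import numpy as np

def riemannian_cov_distance(A, B):
    """Affine-invariant Riemannian distance between symmetric positive definite
    matrices: d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    wA, VA = np.linalg.eigh((A + A.T) / 2)
    A_isqrt = VA @ np.diag(1.0 / np.sqrt(wA)) @ VA.T   # A^{-1/2}
    M = A_isqrt @ ((B + B.T) / 2) @ A_isqrt
    wM = np.linalg.eigvalsh(M)                         # eigenvalues of A^{-1/2} B A^{-1/2}
    return np.sqrt(np.sum(np.log(wM) ** 2))
```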
It is important to note that a mathematical or statistical selection is just a suggestion. In this case we proceed in this way because the experiment lacks an a priori assumption for the anatomical landmark distribution. In our case, the literature shows no expert assumption about normality, see [16] and the references therein; normality was traditionally set in the Gaussian theory in order to simplify computations.
However, if an expert in morphometrics had fixed the Gaussian model for this experiment, then the above selection criterion would be of no significance, because, as we have shown with this landmark data, the method-of-moments estimates do not work properly under Gaussian models. At this stage, the conclusion about the Kotz 5 model confirms that non-Gaussian models explain the three samples better. An elliptical isotropic approach also verified this conclusion, see for example [8].
Once the model is selected, we are interested in the application of Section 5, concerning estimation of the mean form difference. In fact, we can go further with hypothesis testing for equality of the associated Euclidean distance matrices of two populations.
The methodology can be found in [31] and the references therein. We want to test $H_0: \mathbf{F}(\boldsymbol{\mu}_X) = c\,\mathbf{F}(\boldsymbol{\mu}_Y)$, for some $c > 0$, where $\boldsymbol{\mu}_X$ and $\boldsymbol{\mu}_Y$ are the population mean shapes. Consider samples of objects X's and Y's. The exact formulae of Theorem 3.6 give the estimated mean shapes $\widehat{\boldsymbol{\mu}}_X$ and $\widehat{\boldsymbol{\mu}}_Y$. Then we can derive the form difference matrix $\mathrm{FDM}(\widehat{\boldsymbol{\mu}}_X, \widehat{\boldsymbol{\mu}}_Y)$. This matrix can define a number of statistics for testing $H_0$; however, [31] recommend a statistic of the form
$$T = \frac{\max_{i,j} \mathrm{FDM}_{ij}}{\min_{i,j} \mathrm{FDM}_{ij}},$$
where $\mathrm{FDM}_{ij}$ is the $(i, j)$-element of the matrix FDM. Note that if $H_0$ is true, $T$ is close to 1. Moreover, $T$ satisfies the desirable property of invariance under scaling, see [31] for more details.
The null distribution is difficult to obtain even in the Gaussian case, but we can obtain an empirical null distribution by using a bootstrap procedure, see [31] and the references therein. For samples similar to those of the current example, those authors propose a bootstrap of size 100.
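A sketch of the bootstrap test is given below. The max/min form of the statistic and the pooled resampling scheme are plausible readings of [31], not verbatim reproductions, and `estimate_fdm` is a user-supplied placeholder (for instance, built from the Theorem 3.6 mean form estimates).

```python
import numpy as np

def form_ratio_statistic(FDM):
    """Ratio of the largest to the smallest positive entry of the form
    difference matrix; equal to 1 when the two forms agree up to scale."""
    vals = FDM[FDM > 0]
    return vals.max() / vals.min()

def bootstrap_form_test(sample_X, sample_Y, estimate_fdm, n_boot=100, rng=None):
    """Empirical null distribution of the statistic, resampling both groups
    from the pooled sample so that a common mean form holds under H0.
    `estimate_fdm(group_A, group_B)` must return the estimated FDM matrix."""
    rng = np.random.default_rng(rng)
    t_obs = form_ratio_statistic(estimate_fdm(sample_X, sample_Y))
    pooled = list(sample_X) + list(sample_Y)
    n, m = len(sample_X), len(sample_Y)
    t_null = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(pooled), size=n + m)
        t_null.append(form_ratio_statistic(
            estimate_fdm([pooled[k] for k in idx[:n]],
                         [pooled[k] for k in idx[n:]])))
    p_value = float(np.mean(np.array(t_null) >= t_obs))
    return t_obs, p_value
```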
Once the empirical distribution is obtained, a p-value based on the upper tail of the observed statistic can be computed; $H_0$ is rejected for small p-values, near 0.1 or below. Table 6 reports the tests for the Gaussian case and the Kotz 5 model. It shows the p-values for the three possible pairs of groups in this experiment. Under the expected dependence condition of Theorem 3.6, the Gaussian model cannot detect the role of the control group, giving a wrong conclusion. The selected model with covariance distances separates the control group but does not provide strong evidence of a shape difference between small and large bones. This opens a discussion about the coordinate-free approach of [31], because the elementwise quotient in the form difference matrix neglects some important information in this matrix. Then two challenges for future work can be proposed: 1) robust definitions of FDM and 2) exact distributions of FDM(X, Y).

Here $\boldsymbol{\Omega} = (\boldsymbol{\Sigma}_K^{*})^{-}\boldsymbol{\mu}^{*}\boldsymbol{\Sigma}_D^{-1}\boldsymbol{\mu}^{*T}$; note that this case assumes a dependent sample $\mathbf{X}_1, \ldots, \mathbf{X}_n$. Recall also that the method-of-moments estimators are not uniquely defined, see Remark 3.3; then the method-of-moments estimator of $\boldsymbol{\Sigma}_D$ can be obtained from the first two moments of $\mathbf{B}$.

Appendix A: Particular generalized Pseudo-Wishart singular distributions
The following result is a particular case of [13] or [15], when $\boldsymbol{\Theta}$ is a nonsingular matrix.
When $N = s = 1$ and $r = 1/2$ we get the singular matrix variate Gaussian distribution.
Note that the particular singular pseudo-Wishart distributions depend only on the general derivative $h^{(2t)}(\cdot)$ of the elliptical generator function; this seems a trivial fact, but the general formulae involve cumbersome expressions indexed by partitions, see [4]. For the Kotz type distributions the following expressions are obtained.
When $s = 1$, the Kotz type models and their general derivative simplify substantially. Thus, the following expressions apply for the Gaussian, Kotz 1 and Kotz 2 models, with parameters $N = 1, s = 1, r = 1/2$; $N = 2, s = 1, r = 1/2$; and $N = 3, s = 1, r = 1/2$, respectively. The generator model is given below. The required $k$-th derivative of $h$ follows from $\frac{d^{k}}{dy^{k}}\exp(-ry^{s})$, which is given by a sum over partitions, where $\kappa \in P_k$ ranges over all the partitions of $k$.
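The derivative needed above can be checked symbolically; the snippet below is only a sanity check of $d^{k}/dy^{k}\exp(-ry^{s})$ and of its $s = 1$ simplification, and does not reproduce the partition-indexed closed form.

```python
import sympy as sp

# Symbolic sanity check of the k-th derivative of exp(-r*y**s), the quantity
# that drives the Kotz-type generator derivatives h^{(2t)}(.) in this appendix.
y, r, s = sp.symbols("y r s", positive=True)
k = 3
print(sp.simplify(sp.diff(sp.exp(-r * y**s), y, k)))

# For s = 1 (the Gaussian, Kotz 1 and Kotz 2 cases) the derivative collapses
# to (-r)**k * exp(-r*y), which is the simplification mentioned in the text.
print(sp.simplify(sp.diff(sp.exp(-r * y), y, k)))   # -> -r**3*exp(-r*y)
```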