ClustOfVar: An R Package for the Clustering of Variables

Clustering of variables is a way to arrange variables into homogeneous clusters, i.e., groups of variables which are strongly related to each other and thus carry the same information. These approaches can be useful for dimension reduction and variable selection. Several specific methods have been developed for the clustering of numerical variables. However, concerning qualitative variables or mixtures of quantitative and qualitative variables, far fewer methods have been proposed. The R package ClustOfVar was specifically developed for this purpose. The homogeneity criterion of a cluster is defined as the sum of correlation ratios (for qualitative variables) and squared correlations (for quantitative variables) to a synthetic quantitative variable, which summarizes "as well as possible" the variables in the cluster. This synthetic variable is the first principal component obtained with the PCAMIX method. Two algorithms for the clustering of variables are proposed: an iterative relocation algorithm and an ascendant hierarchical clustering. We also propose a bootstrap approach to determine suitable numbers of clusters. We illustrate the methodologies and the associated package on small datasets.


Introduction
Principal Component Analysis (PCA) and Multiple Correspondence Analysis (MCA) are appealing statistical tools for the multivariate description of numerical and categorical data, respectively. Rotated principal components fulfill the need for more interpretable components.
Clustering of variables is an alternative since it makes it possible to arrange variables into homogeneous clusters and thus to obtain meaningful structures. From a general point of view, variable clustering lumps together variables which are strongly related to each other and thus carry the same information. Once the variables are clustered into groups such that the attributes in each group reflect the same aspect, the practitioner may wish to select one variable from each group. One may also want to construct a synthetic variable. For instance, in the case of quantitative variables, a solution is to perform a PCA in each cluster and to retain the first principal component as the synthetic variable of the cluster.
A simple and frequently used approach for clustering a set of variables is to calculate the dissimilarities between these variables and to apply a classical cluster analysis method to this dissimilarity matrix. We can cite the function hclust of the R package stats (R Development Core Team 2011) and the function agnes of the package cluster (Maechler, Rousseeuw, Struyf, and Hubert 2005), which can be used for single, complete, and average linkage hierarchical clustering. The functions diana and pam of the package cluster can also be used for divisive hierarchical clustering and partitioning around medoids, respectively (Kaufman and Rousseeuw 1990). But the dissimilarity matrix has to be calculated first. For quantitative variables, many dissimilarity measures can be used: correlation coefficients (parametric or nonparametric) can be converted to different dissimilarities depending on whether the aim is to lump together correlated variables regardless of the sign of the correlation, or whether a negative correlation coefficient between two variables indicates disagreement between them. For categorical variables, many association measures can be used, such as χ², Rand, Belson, Jaccard, Sokal and Jordan, among others. Many strategies can then be applied and it can be difficult for the user to choose one of them. Moreover, no synthetic variables of the clusters are directly provided with this approach.
Besides these classical methods devoted to the clustering of observations, there exist methods specifically devoted to the clustering of variables. The most famous one is the VARCLUS procedure of the SAS software. Recently, specific methods based on PCA were proposed by Vigneau and Qannari (2003) under the name Clustering around Latent Variables (CLV) and by Dhillon, Marcotte, and Roshan (2003) under the name Diametrical Clustering. But all these specific approaches work only with quantitative data and, as far as we know, they are not implemented in R.
The aim of the package ClustOfVar is then to propose in R methods specifically devoted to the clustering of variables, with no restriction on the type (quantitative or qualitative) of the variables. The clustering methods developed in the package work with a mixture of quantitative and qualitative variables, and also work for a set containing exclusively quantitative (or qualitative) variables. In addition, note that missing data are allowed: they are replaced by means for quantitative variables and by zeros in the indicator matrix for qualitative variables. Two methods are proposed for the clustering of variables: a hierarchical clustering algorithm and a k-means type partitioning algorithm, respectively implemented in the functions hclustvar and kmeansvar. These two methods are based on PCAMIX, a principal component method for a mixture of qualitative and quantitative variables (Kiers 1991). This method includes ordinary PCA and MCA as special cases. Here we use a Singular Value Decomposition (SVD) approach to PCAMIX (Chavent, Kuentz, and Saracco 2011). These two clustering algorithms aim at maximizing a homogeneity criterion. A cluster of variables is defined as homogeneous when the variables in the cluster are strongly linked to a central quantitative synthetic variable. This link is measured by the squared Pearson correlation for the quantitative variables and by the correlation ratio for the qualitative variables. The quantitative central synthetic variable of a cluster is the first principal component of PCAMIX applied to all the variables in the cluster. Note that the synthetic variables of the clusters can be used for dimension reduction or for recoding purposes. Moreover, a method based on a bootstrap approach is also proposed to evaluate the stability of the partitions of variables; it can be used to determine a suitable number of clusters. It is implemented in the function stability. The rest of this paper is organized as follows.
Section 2 contains a detailed description of the homogeneity criterion and a description of the PCAMIX procedure for the determination of the central synthetic variable. Section 3 describes the clustering algorithms and the bootstrap procedure. Section 4 provides two data-driven examples in order to illustrate the use of the functions and objects of the package ClustOfVar. Finally, Section 5 gives concluding remarks.

The homogeneity criterion
Let {x_1, . . . , x_{p1}} be a set of p1 quantitative variables and {z_1, . . . , z_{p2}} a set of p2 qualitative variables. Let X and Z denote the corresponding quantitative and qualitative data matrices of dimensions n × p1 and n × p2, where n is the number of observations. For the sake of simplicity, we denote by x_j ∈ R^n the j-th column of X, and by z_j ∈ M_1 × . . . × M_{p2} the j-th column of Z, with M_j the set of categories of z_j. Let P_K = (C_1, . . . , C_K) be a partition into K clusters of the p = p1 + p2 variables.
Synthetic variable of a cluster C_k. It is defined as the quantitative variable y_k ∈ R^n the "most linked" to all the variables in C_k:

y_k = arg max_{u ∈ R^n} { Σ_{x_j ∈ C_k} r²(x_j, u) + Σ_{z_j ∈ C_k} η²_{u|z_j} },

where r² denotes the squared Pearson correlation and η² denotes the correlation ratio. More precisely, the correlation ratio η²_{u|z_j} ∈ [0, 1] measures the part of the variance of u explained by the categories of z_j:

η²_{u|z_j} = ( Σ_{s ∈ M_j} n_s (ū_s − ū)² ) / ( Σ_{i=1}^n (u_i − ū)² ),

where n_s is the frequency of category s, ū_s is the mean value of u calculated over the observations belonging to category s, and ū is the mean of u.
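As a concreteness check, the correlation ratio can be computed directly in base R from this definition. The helper name eta2 below is ours, not part of the package; it is a minimal sketch of the formula above:

```r
# Correlation ratio eta^2(u | z): share of the variance of a numeric
# vector u explained by the categories of a factor z.
eta2 <- function(u, z) {
  z <- as.factor(z)
  ubar <- mean(u)                # overall mean of u
  ubar_s <- tapply(u, z, mean)   # per-category means
  n_s <- tapply(u, z, length)    # category frequencies
  sum(n_s * (ubar_s - ubar)^2) / sum((u - ubar)^2)
}

u <- c(1, 2, 3, 4, 5, 6)
z <- factor(c("a", "a", "a", "b", "b", "b"))
eta2(u, z)  # between-group SS / total SS = 13.5 / 17.5
```

Note that eta2(u, z) coincides with the R² of the one-way ANOVA of u on z, which gives an easy cross-check.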
We have the following important results (Escofier 1979; Saporta 1990; Pagès 2004):

• y_k is the first principal component of PCAMIX applied to X_k and Z_k, the matrices made up of the columns of X and Z corresponding to the variables in C_k;

• the empirical variance of y_k is equal to:

VAR(y_k) = Σ_{x_j ∈ C_k} r²(x_j, y_k) + Σ_{z_j ∈ C_k} η²_{y_k|z_j}.

The determination of y_k using PCAMIX is carried out according to the following steps:

1. Recoding of X_k and Z_k:
(a) X̃_k is the standardized version of the quantitative matrix X_k,
(b) Z̃_k = JGD^{−1/2} is the standardized version of the indicator matrix G of the qualitative matrix Z_k, where D is the diagonal matrix of frequencies of the categories and J = I − 11'/n is the centering operator, with I the identity matrix and 1 the vector with unit entries.

2. Concatenation of the two recoded matrices: M_k = (X̃_k | Z̃_k).

3. Singular Value Decomposition of M_k: M_k = UΛV'.

4. Extraction of the useful outputs:
• √n UΛ is the matrix of the principal component scores of PCAMIX;
• y_k is the first column of √n UΛ;
• VAR(y_k) = λ₁^k, where λ₁^k is the first eigenvalue in Λ.
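For a purely quantitative cluster, these steps reduce to a few lines of base R. The following sketch is our own (it assumes population-variance standardization, so that the eigenvalues match those of the correlation matrix); it recovers y_k and checks that its empirical variance equals λ₁:

```r
set.seed(1)
x1 <- rnorm(40)
X <- cbind(x1 = x1, x2 = x1 + rnorm(40, sd = 0.5))  # two correlated variables
n <- nrow(X)

# Step 1: standardize with population (1/n) standard deviations
sd_pop <- apply(X, 2, function(col) sqrt(mean((col - mean(col))^2)))
Xs <- sweep(sweep(X, 2, colMeans(X), "-"), 2, sd_pop, "/")

# Steps 2-3: SVD of the row-weighted recoded matrix
s <- svd(Xs / sqrt(n))

# Step 4: principal component scores and first eigenvalue
scores <- sqrt(n) * s$u %*% diag(s$d)
y <- scores[, 1]          # synthetic variable of the cluster
lambda1 <- s$d[1]^2       # first eigenvalue

mean((y - mean(y))^2)     # empirical variance of y, equal to lambda1
```

Because the columns of Xs are standardized, λ₁ is also the largest eigenvalue of the correlation matrix of X, which makes the result easy to verify.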
Note that we recently developed an R package named PCAmixdata, with a function PCAmix which provides the principal components of PCAMIX and a function PCArot which provides the principal components after rotation.
Homogeneity H of a cluster C_k. It is a measure of adequacy between the variables in the cluster and its central synthetic quantitative variable y_k:

H(C_k) = Σ_{x_j ∈ C_k} r²(x_j, y_k) + Σ_{z_j ∈ C_k} η²_{y_k|z_j} = λ₁^k.     (1)

Note that the first term (based on the squared Pearson correlation r²) measures the link between the quantitative variables in C_k and y_k, independently of the sign of the relationship. The second one (based on the correlation ratio η²) measures the link between the qualitative variables in C_k and y_k. The homogeneity of a cluster is maximal when all the quantitative variables are correlated (or anti-correlated) with y_k and when all the correlation ratios of the qualitative variables are equal to 1. In that case, all the variables in the cluster C_k bring the same information.
Homogeneity H of a partition P_K. It is defined as the sum of the homogeneities of its clusters:

H(P_K) = H(C_1) + . . . + H(C_K) = λ₁¹ + . . . + λ₁^K,     (2)

where λ₁¹, . . . , λ₁^K are the first eigenvalues of PCAMIX applied to the K clusters C_k of P_K.

The clustering algorithms
The aim is to find a partition of a set of quantitative and/or qualitative variables such that the variables within a cluster are strongly related to each other. In other words, the objective is to find a partition P_K which maximizes the homogeneity function H defined in (2). To this end, a hierarchical clustering algorithm and a partitioning clustering algorithm are proposed in the package ClustOfVar. A bootstrap procedure is also proposed to evaluate the stability of the partitions into K = 2, 3, . . . , p − 1 clusters and thereby to help the user determine a suitable number of clusters of variables.
The hierarchical clustering algorithm. This algorithm builds a set of p nested partitions of variables in the following way:

1. Step l = 0: initialization. Start with the partition in p clusters.

2. Step l = 1, . . . , p − 2: aggregate two clusters of the partition in p − l + 1 clusters to get a new partition in p − l clusters. For this, choose the two clusters A and B with the smallest dissimilarity d defined as:

d(A, B) = H(A) + H(B) − H(A ∪ B) = λ₁^A + λ₁^B − λ₁^{A∪B}.     (3)

This dissimilarity measures the loss of homogeneity observed when the two clusters A and B are merged. Using this aggregation measure, the new partition in p − l clusters maximizes H among all the partitions in p − l clusters obtained by aggregation of two clusters of the partition in p − l + 1 clusters.

3. Step l = p − 1: stop. The partition in one cluster is obtained.
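For purely quantitative clusters, the aggregation criterion can be sketched in a few lines of base R. The helpers H and d below are our own (H(C) is taken as the first eigenvalue of the correlation matrix of the cluster, as in the quantitative special case of PCAMIX):

```r
# Homogeneity of a quantitative cluster: first eigenvalue of its
# correlation matrix
H <- function(M) eigen(cor(M))$values[1]

# Loss of homogeneity when clusters A and B are merged
d <- function(A, B) H(A) + H(B) - H(cbind(A, B))

set.seed(2)
x1 <- rnorm(30)
x2 <- 0.8 * x1 + rnorm(30, sd = 0.3)

# For two single variables, d reduces to 1 - |r|, so strongly
# correlated (or anti-correlated) variables are merged first
d(cbind(x1), cbind(x2))
```

This makes the behavior of the dendrogram heights concrete: merging two highly correlated variables costs almost nothing, while merging unrelated variables costs close to 1.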
This algorithm is implemented in the function hclustvar, which builds a hierarchy of the p variables. The function plot.hclustvar gives the dendrogram of this hierarchy. The height of a cluster C = A ∪ B in this dendrogram is defined as h(C) = d(A, B). It is easy to verify that h(C) ≥ 0, but the property "A ⊂ B ⇒ h(A) ≤ h(B)" has not been proved yet. Nevertheless, inversions in the dendrogram have never been observed in practice, either on simulated or on real data sets. Finally, the function cutreevar cuts this dendrogram and gives one of the p nested partitions according to the number K of clusters given in input by the user.
The partitioning algorithm. This partitioning algorithm requires the definition of a similarity measure between two variables of any type (quantitative or qualitative). We use for this purpose the squared canonical correlation between two data matrices E and F of dimensions n × r1 and n × r2. This correlation, denoted by s, can be easily calculated as:

– the first eigenvalue of the n × n matrix EE'FF' if min(n, r1, r2) = n,
– the first eigenvalue of the r1 × r1 matrix E'FF'E if min(n, r1, r2) = r1,
– the first eigenvalue of the r2 × r2 matrix F'EE'F if min(n, r1, r2) = r2.
This similarity s can be specified as follows:

– For two quantitative variables x_i and x_j, let E = x̃_i and F = x̃_j, where x̃_i and x̃_j are the standardized versions of x_i and x_j. In this case, the squared canonical correlation is the squared Pearson correlation: s(x_i, x_j) = r²(x_i, x_j).

– For one qualitative variable z_i and one quantitative variable x_j, let E = Z̃_i and F = x̃_j, where Z̃_i is the standardized version of the indicator matrix G_i of the qualitative variable z_i. In this case, the squared canonical correlation is the correlation ratio: s(z_i, x_j) = η²_{x_j|z_i}.
– For two qualitative variables z_i and z_j having r and s categories, let E = Z̃_i and F = Z̃_j. In this case, the squared canonical correlation s(z_i, z_j) does not correspond to a well-known association measure. Its interpretation is geometrical: the closer s(z_i, z_j) is to one, the closer the two linear subspaces spanned by the matrices E and F are. The two qualitative variables z_i and z_j then bring similar information.
This similarity measure is implemented in the function mixedVarSim.
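A base-R sketch of this similarity for the quantitative case can illustrate the three matrix forms. The helpers canc2 and std below are our own (Fm plays the role of F in the text, since F aliases FALSE in R; the sketch assumes the columns are centered and scaled to unit norm, so that the result reduces to r² for two quantitative variables):

```r
# Squared canonical correlation between the column spaces of two
# recoded data matrices, as the first eigenvalue of the smallest of
# the three matrix forms
canc2 <- function(E, Fm) {
  n <- nrow(E); r1 <- ncol(E); r2 <- ncol(Fm)
  if (min(n, r1, r2) == r1) {
    eigen(t(E) %*% Fm %*% t(Fm) %*% E)$values[1]
  } else if (min(n, r1, r2) == r2) {
    eigen(t(Fm) %*% E %*% t(E) %*% Fm)$values[1]
  } else {
    eigen(E %*% t(E) %*% Fm %*% t(Fm))$values[1]
  }
}

# center a vector and scale it to unit norm (an assumption of this sketch)
std <- function(x) { x <- x - mean(x); matrix(x / sqrt(sum(x^2))) }

set.seed(3)
x1 <- rnorm(25)
x2 <- -0.7 * x1 + rnorm(25, sd = 0.5)
canc2(std(x1), std(x2))  # equals cor(x1, x2)^2
```

Note that the sign of the correlation is lost, consistent with the fact that the clustering lumps together correlated and anti-correlated variables alike.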
The clustering algorithm implemented in the function kmeansvar then builds a partition in K clusters in the following way:

1. Initialization step: two possibilities are available.
(a) A non-random initialization: an initial partition in K clusters is given in input (for instance the partition obtained by cutting the dendrogram of the hierarchy).
(b) A random initialization:
i. K variables are randomly selected among the p variables as initial central synthetic variables (named centers hereafter).
ii. An initial partition into K clusters is built by allocating each variable to the cluster with the closest initial center: the similarity between a variable and an initial center is calculated with the function mixedVarSim.

2. Repeat:
(a) A representation step: the quantitative central synthetic variable y_k of each cluster C_k is calculated with PCAMIX as defined in Section 2.
(b) An allocation step: a partition is constructed by assigning each variable to the closest cluster. The similarity between a variable and the central synthetic quantitative variable of the corresponding cluster is calculated with the function mixedVarSim: it is either a squared correlation (if the variable is quantitative) or a correlation ratio (if the variable is qualitative).

3. Stop if there are no more changes in the partition or if a maximum number of iterations (fixed by the user) is reached.
The iterative procedure implemented in kmeansvar provides a partition P_K into K clusters which maximizes H, but this optimum is local and depends on the initial partition. A solution to overcome this problem and to avoid the influence of an arbitrary initial partition is to consider multiple random initializations. In this case, steps 1(b), 2 and 3 are repeated, and we propose to retain as final partition the one which provides the highest value of H.

Stability of partitions of variables.
This procedure evaluates the stability of the p nested partitions of the dendrogram obtained with hclustvar. It works as follows:

1. B bootstrap samples of the n observations are drawn and the corresponding B dendrograms are obtained with the function hclustvar.

2. The partitions of these B dendrograms are compared with the partitions of the initial hierarchy using the adjusted Rand index. The Rand and the adjusted Rand indices are implemented in the function Rand (see Hubert and Arabie (1985) for details on these indices).

3. The stability of a partition is evaluated as the mean of the B adjusted Rand indices.
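The comparison in step 2 relies on the adjusted Rand index, which can be computed from the contingency table of two partitions. A minimal base-R version (the helper adj_rand is ours, following the Hubert and Arabie correction):

```r
# Adjusted Rand index between two partitions given as label vectors
adj_rand <- function(p1, p2) {
  tab <- table(p1, p2)
  comb2 <- function(x) sum(x * (x - 1) / 2)  # sum of "choose 2" counts
  sum_ij <- comb2(tab)                       # agreements within cells
  sum_i <- comb2(rowSums(tab))
  sum_j <- comb2(colSums(tab))
  expected <- sum_i * sum_j / comb2(length(p1))
  (sum_ij - expected) / ((sum_i + sum_j) / 2 - expected)
}

adj_rand(c(1, 1, 2, 2), c(2, 2, 1, 1))  # identical up to relabeling -> 1
```

The index equals 1 for identical partitions (up to relabeling of the clusters) and has expected value 0 for independent random partitions, which makes it a natural stability score to average over bootstrap replications.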
The plot of this stability criterion according to the number of clusters can help the user choose a sensible and suitable number of clusters. Note that an error message may appear with this function when a qualitative variable has rare categories. Indeed, if such a rare category disappears in a bootstrap sample of observations, a column of identical values is formed and the standardization of this variable is not possible in the PCAMIX step.

Illustration on simple examples
We illustrate our R package ClustOfVar on two real datasets: the first one contains only quantitative variables, the second one a mixture of quantitative and qualitative variables.

First example: Quantitative data
We use the dataset decathlon, which contains n = 41 athletes described by their performances in the p = 10 events of the decathlon.

Figure 1: Graphical output of the function plot.hclustvar (aggregation levels and dendrogram).
In Figure 1, the plot of the aggregation levels suggests choosing 3 or 5 clusters of variables. The dendrogram, on the right-hand side of this figure, shows the links between the variables in terms of r². For instance, the two variables "discus" and "shot put" are linked, as are the two variables "Long.jump" and "400m", but the user must keep in mind that the dendrogram does not indicate the sign of these relationships: a careful study of these variables shows that "discus" and "shot put" are correlated whereas "Long.jump" and "400m" are anti-correlated.
The user can use the stability function in order to have an idea of the stability of the partitions of the dendrogram represented in Figure 1.
R> stab <- stability(tree, B = 40)
R> plot(stab, main = "Stability of the partitions")
R> stab$matCR
R> boxplot(stab$matCR, main = "Dispersion of the adjusted Rand index")

On the left of Figure 2, the plot of the mean (over the B = 40 bootstrap samples) of the adjusted Rand indices is obtained with the function plot.clustab. It clearly suggests choosing 5 clusters. The boxplots on the right of Figure 2 show the dispersion of these indices over the B = 40 bootstrap replications for each partition; they suggest 3 or 5 clusters.
In the following we choose K = 3 clusters because PCA applied to each of the 3 clusters gives each time only one eigenvalue greater than 1. The function cutreevar cuts the dendrogram of the hierarchy and gives a partition into K = 3 clusters of the p = 10 variables:

R> P3 <- cutreevar(tree, 3)
R> cluster <- P3$cluster
R> princomp(X.quanti[, which(cluster == 1)], cor = TRUE)$sdev^2
R> princomp(X.quanti[, which(cluster == 2)], cor = TRUE)$sdev^2
R> princomp(X.quanti[, which(cluster == 3)], cor = TRUE)$sdev^2

The partition P3 is contained in an object of class clustvar. Note that partitions obtained with the kmeansvar function are also objects of class clustvar. The function print.clustvar gives a description of the values of this object.

R> P3 <- cutreevar(tree, 3, matsim = TRUE)
R> print(P3)
Call: cutreevar(obj = tree, k = 3)

name       description
"$var"     "list of variables in each cluster"
"$sim"     "similarity matrix in each cluster"
"$cluster" "cluster memberships"
"$wss"     "within-cluster sum of squares"
"$E"       "gain in cohesion (in %)"
"$size"    "size of each cluster"
"$scores"  "score of each cluster"

The value $wss is H(P_K), where the homogeneity function H was defined in (2). The gain in cohesion $E is the percentage of homogeneity accounted for by the partition P_K. It is defined by:

E(P_K) = (H(P_K) − H(P_1)) / (H(P_p) − H(P_1)) × 100,     (4)

where P_1 is the partition in a single cluster and P_p the partition in p singleton clusters. The value $sim provides the similarity matrices of the variables in each cluster (calculated with the function mixedVarSim). Note that computing these similarity matrices is time consuming when the number of variables is large. Thus they are not calculated by default: matsim = TRUE must be specified in the parameters of the function hclustvar (or kmeansvar) if the user wants this output. We provide below the similarity matrix for the first cluster of this partition into 3 clusters.
Note that this 41 × 3 matrix of the scores of the 41 athletes in each cluster of variables is of course different from the 41 × 3 matrix of the scores of the athletes on the first 3 principal components of PCAMIX (here PCA) applied to the initial dataset. For instance, the 3 synthetic variables can be correlated, whereas the first 3 principal components of PCAMIX are uncorrelated by construction. But the matrix of the synthetic variables in $scores can be used like the matrix of the principal components of PCAMIX for dimension reduction purposes.

Second example: A mixture of quantitative and qualitative data
We use the dataset wine, which contains n = 21 French wines described by p = 31 variables. The first two variables "Label" and "Soil" are qualitative, with respectively 3 and 4 categories. The other 29 variables are quantitative. In order to get an idea of the links between these 31 quantitative and qualitative variables, we build a hierarchy with the function hclustvar.
R> X.quanti <- wine[, c(3:29)]
R> X.quali <- wine[, c(1, 2)]
R> tree <- hclustvar(X.quanti, X.quali)
R> plot(tree)

Figure 3 shows the resulting dendrogram. It indicates for instance that the qualitative variable "Label" is linked (in terms of correlation ratio) with the quantitative variable "Phenolic". According to this dendrogram, the user chooses to cut it into K = 6 clusters:

R> part_hier <- cutreevar(tree, 6)
R> part_hier$var$"cluster1"

A close reading of the output for "cluster1" shows that the correlation ratio between the qualitative variable "Soil" and the synthetic variable of the cluster is about 0.78, and the squared correlation between "Odor.Intensity" and the synthetic variable of the cluster is 0.76.
The central synthetic variables of the 6 clusters are in part_hier$scores. This 21×6 quantitative matrix can replace the original 21 × 31 data matrix mixing qualitative and quantitative variables. This matrix of the synthetic variables can then be used for recoding a mixed data matrix (or a qualitative data matrix) into a quantitative data matrix, as is usually done with the matrix of the principal components of PCAMIX.
The function kmeansvar can also provide a partition into K = 6 clusters of the 31 variables.
R> part_km <- kmeansvar(X.quanti, X.quali, init = 6, nstart = 10)

The gain in cohesion (4) of the partition obtained with the k-means type partitioning algorithm and 10 random initializations is smaller than that of the partition obtained with the hierarchical clustering algorithm. In practice, simulations and real datasets have shown that the quality of the partitions obtained with hclustvar tends to be better than that obtained with kmeansvar. But for large datasets (with a large number of variables), the function hclustvar runs into computation time problems. In this case, the function kmeansvar will be faster.

Concluding remarks
The R package ClustOfVar proposes hierarchical and k-means type algorithms for the clustering of variables of any type (quantitative and/or qualitative).
This package provides useful tools to visualize the links between the variables and the redundancy in a data set. It is also an alternative to principal component analysis methods for dimension reduction and for recoding qualitative or mixed data matrices into a quantitative data matrix. The main difference between PCA and the approach of clustering of variables presented in this paper is that the synthetic variables of the clusters can be correlated, whereas the principal components are uncorrelated by construction.