Collective Solutions on Sets of Stable Clusterings

Two clustering problems are considered. The first concerns many different clusterings of the same data for a given number of clusters. Data clustering is understood as a stable partition of the data into a given number of sets: a clustering is considered stable if the partition remains unchanged under a minimal change of the sample. How can a new clustering be created from an ensemble of clusterings? The second problem is the following. A definition of committee synthesis as ensemble clustering is introduced. The sets of best and worst estimation matrices are considered. An optimal clustering is built from the clusterings obtained, as the one closest to the set of best estimation matrices or the most distant from the set of worst-case estimation matrices. As a result, the problem of finding the best committee clustering is formulated as a discrete optimization problem on permutations.


Introduction
There are many different approaches to solving the problems of clustering multidimensional data: approaches based on the optimization of internal criteria (indices) [1,2], hierarchical clustering [3], centroid-based clustering [4], density-based clustering [5], distribution-based clustering [6], and many others. There are well-known books and papers on clustering [7][8][9][10]. This section is devoted to one approach to the creation of stable clusterings and to the processing of their sets. A natural criterion is considered that is applicable to any clustering method. In [11], various criteria (indices) are proposed; optimizing each of them builds a clustering under a particular answer to the question "what is clustering?" In this chapter, we use a criterion based on stability. If we have really obtained a clustering, that is, a solution for the whole sample, the partition should not change under a small change in the data. Criteria for the quality of the obtained partition are introduced. If the criterion value is less than one, the partition is unstable. Suppose we obtain N clusterings of the same data. How can a new ensemble clustering be created from the N partitions? Previously, a committee method for building ensemble clusterings was proposed [12][13][14][15]. Let there be N results of cluster analysis of the same data for l clusters. The committee method of building an ensemble clustering makes it possible to build l clusters, each of which is the intersection of "many" initial clusters. In other words, we find l clusters whose objects are "equivalent" to each other according to several principles. As the initial N clusterings, one can take stable ones. Finally, we consider a video-logical approach to building the initial N coarse clusterings.

Criteria for stability of clustering
Let a sample of objects X = {x_i, i = 1, 2, …, m}, x_i ∈ R^n, be given, and let K = {K_1, K_2, …, K_l} be the clustering of the sample into l clusters obtained by some method. Speaking of clustering, we mean applying a method to a sample without focusing on the method itself. Is the partition K of a sample by this method a clustering, or is merely some stopping criterion satisfied here? For example, an extremum of some functional is reached, or the maximum number of operations in the iterative process is exhausted. We will use the following thesis as the main one. If the resulting partition K is indeed a clustering, then it must remain the same clustering for any minimal change of the sample X. Let x_i be arbitrary, and let K*(x_i) denote the partition obtained by the same method after a minimal change of the sample involving x_i. If K*(x_i) and K coincide up to the numbering of clusters, we call them identical and write K*(x_i) ≈ K. In this case, it is natural to call a partition K a stable clustering if the partitions K*(x_i) and K are identical for all x_i, i = 1, 2, …, m. If some individual K*(x_i) is not identical with K, we call K a quasi-clustering.
Definition 1. The quality of a quasi-clustering (of an unstable clustering) is the quantity Φ(K); if Φ(K) = 1, then we speak of a stable clustering K, or simply of a clustering. Suppose that for some i, i = 1, 2, …, m, the condition K*(x_i) ≈ K is not satisfied. As the function of proximity between the clustering K°(x_i) and the partition K we use the best-matching agreement of their clusters. Note that to calculate this proximity it is required to find a maximum matching in a bipartite graph, for which there is a polynomial algorithm [16].
Definition 2. The quality F_min(K) of the quasi-clustering K is the minimum over i of the proximity between K*(x_i) and K. Definition 3. The quality F_avr(K) of the quasi-clustering K is the average of this proximity over i. For some clustering algorithms, there are simple economical rules for computing Φ(K). Let us present them (see also [3,17,18]).
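The leave-one-out stability check behind Definitions 1-3 can be illustrated as follows. This is a minimal sketch under assumptions of ours: the minimal change is taken to be the removal of a single object, `cluster` stands in for any clustering method, and the best matching of clusters is found by brute force over renumberings (for many clusters one would use the polynomial Hungarian algorithm mentioned in the text).

```python
import itertools
import numpy as np

def partition_agreement(a, b, l):
    """Proximity of two label vectors: the largest fraction of objects on
    which they agree under the best renumbering of clusters (a maximum
    matching; brute force here, Hungarian algorithm for large l)."""
    best = 0.0
    for perm in itertools.permutations(range(l)):
        relabeled = np.array([perm[x] for x in b])
        best = max(best, float(np.mean(relabeled == a)))
    return best

def stability_criteria(X, l, cluster):
    """F_min and F_avr: recluster after removing each object in turn and
    compare the result with the original partition restricted to the
    remaining objects."""
    base = cluster(X, l)
    scores = []
    for i in range(len(X)):
        Xi = np.delete(X, i, axis=0)   # minimal change: drop x_i
        Ki = cluster(Xi, l)
        base_i = np.delete(base, i)    # reference partition without x_i
        scores.append(partition_agreement(base_i, Ki, l))
    return min(scores), float(np.mean(scores))
```

On a well-separated sample every leave-one-out partition coincides with the original one, so both criteria equal 1, which corresponds to the stable case discussed below.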

Method of minimizing the dispersion criterion
It is known that, in order to minimize the dispersion criterion, it suffices that the corresponding inequalities hold for any clusters K_j and K_k and an arbitrary x* ∈ K_j. We establish the conditions for the identity K*(x_i) ≈ K of the partitions K*(x_i) and K: in the case x* ∈ K_j, considering Eq. (1), the identity condition must be satisfied.

k-means method
Let the clustering K be obtained by the k-means method, that is, each object is assigned to the cluster with the nearest centroid; in the case of equality of distances, the object is considered to belong to the cluster with the lower number. Then the identity K*(x_i) ≈ K can be verified by checking that the nearest-centroid assignments of the remaining objects do not change after the centroid of the cluster containing x_i is recomputed without it.
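A cheap check of this k-means condition might look as follows. This is a sketch under our assumptions: the minimal change is the removal of a single object, and ties are broken toward the lower cluster number, as in the text (`argmin` does exactly that).

```python
import numpy as np

def kmeans_stable(X, labels, l):
    """Economical stability check for a k-means partition: after removing
    any single object x_i, the remaining objects must still be assigned to
    their old clusters by the nearest-centroid rule (argmin breaks ties
    toward the lower cluster number, as in the text)."""
    m = len(X)
    centroids = np.array([X[labels == j].mean(axis=0) for j in range(l)])
    for i in range(m):
        c = labels[i]
        members = labels == c
        if members.sum() <= 1:
            continue  # removing the only member would empty the cluster
        cents = centroids.copy()
        cents[c] = X[members & (np.arange(m) != i)].mean(axis=0)
        # squared Euclidean distances of every object to every centroid
        d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
        mask = np.arange(m) != i
        if not np.array_equal(d.argmin(axis=1)[mask], labels[mask]):
            return False
    return True
```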

Method of hierarchical agglomeration grouping
We confine ourselves to the case of an agglomerative hierarchical grouping. Here it is possible to economize in the calculation of Φ(K) without carrying the clustering through for some of the i. Indeed, let K be a partition obtained by the clustering algorithm on X. The main property of the hierarchical grouping is that clusters only merge as the process goes on. In this case, if at some step t, t ≤ m − l, for some k the condition K_k^t ⊆ K_j does not hold for any j = 1, 2, …, l, then the condition K*(x_i) ≈ K will not be fulfilled.

Examples
We give some examples illustrating the stability criteria introduced.
1. Below are the results obtained for model samples. The method of clustering based on minimization of the dispersion criterion [3] was used. As the initial data, we used samples from a mixture of two two-dimensional normal distributions with independent features and different a and σ. Examples are shown in Figures 1-3 (images of the samples in question) and in Tables 1 and 2. Figure 1 represents a sample of 200 objects for which all the criteria Φ(K), F_min(K), F_avr(K) are equal to 1, and the resulting clustering into two clusters is a stable clustering. Further experiments were carried out with the same parameters a_1, a_2. Then we used distributions with parameters giving strongly intersecting distributions. Formally, the clustering method gives a quasi-clustering, approximately corresponding to the partitioning of the original sample (Figure 3) into two sets by a diagonal from the upper left corner of the picture to the lower right. The values of the criteria in Table 2 were obtained.
2. Data clustering of [19] and the criteria values Φ(K), F_min(K), F_avr(K). Data from a classification problem of electromagnetic signals were considered, and we give the values of the stability criteria obtained. Figure 4 shows the visualization [3] of the sample. The accuracy of the supervised classification methods was about 87% of correct answers. However, the clustering of the data turned out to be only a quasi-clustering (Table 3).

Committee synthesis of ensemble clustering
The problem is as follows. There are N clusterings for the same number of clusters. How does one choose a single clustering from them, or build a new clustering from the available ones? In the supervised classification problem (solved with the help of a collective of algorithms) there is a criterion by which one can choose an algorithm from the existing ones or build a new algorithm: the supervised classification error. This direction in the theory of classification appeared in the early 1970s [20,21]; then an algebraic approach was created [22], and various correctors appeared. The key idea of the algebraic approach is the creation, in the form of special algebraic polynomials, of a correct (error-free) algorithm based on a set of supervised classification algorithms. Algebraic operations on matrices of "degrees of belonging" of the recognized objects are used. Various types of correctors were also created [22][23][24][25], in which the problem of constructing (and applying) the best algorithm is solved in two stages: first the supervised classification algorithms are determined, and then the corrector. This can be, for example, the problem of approximating a given partial Boolean function by some monotonic function. In recent decades there have been conferences on multiple classifier systems, and these issues are reflected in the books [21,10]. How does one choose or create the best clustering using a finite set of given solutions? Here all the difficulties are connected primarily with the absence of a single generally accepted criterion. Each clustering algorithm finds "source" clusters of objects that are "equivalent" to each other. In this chapter, it is proposed to build a clustering of the initial data whose clusters have a large intersection with the initial clusters.
Let a sample of objects X = {x_1, x_2, …, x_m}, x_i ∈ R^n, for supervised classification with l classes be given. In the theory of supervised classification, the following definition of a supervised classification algorithm exists [21]. Let α_ij ∈ {0, 1} be equal to 1 when the object x_i, i = 1, 2, …, m, is classified by the algorithm A_r as x_i ∈ K_j, and 0 otherwise. Here the intersection of classes is allowed. Unlike the supervised classification problem, when clustering a sample we have freedom in the designation of clusters.
Definition 4. Two matrices ‖α_ij‖ with entries in {0, 1} are said to be equivalent if they are equal to within a permutation of the columns.
Clearly, this definition assigns to each matrix a class of equivalent matrices.
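Definition 4 can be tested directly; for a small number of clusters a brute-force search over column permutations suffices (the function name is ours):

```python
import itertools
import numpy as np

def equivalent(A, B):
    """Definition 4: two m-by-l information matrices are equivalent if they
    are equal to within a permutation of the columns (i.e., they describe
    the same partition up to cluster renumbering)."""
    if A.shape != B.shape:
        return False
    l = A.shape[1]
    return any(np.array_equal(A[:, list(p)], B)
               for p in itertools.permutations(range(l)))
```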

Definition 5. A clustering algorithm is an algorithm that maps a sample X to a set of equivalent information matrices.
The number of clusters and the length of the control sample are considered given. This definition emphasizes the fact that in an arbitrary partition of a sample into l clusters we have complete freedom in the numbering of the clusters. In what follows we shall always consider matrices of dimension m × l.

Let there be given N clustering algorithms A^c_1, A^c_2, …, A^c_N and their solutions, the m × l information matrices; by K^c we denote the set of ensemble clusterings that can be built from them. There are two problems.

1. Constructing the set K^c (that is, the construction of some kind of clustering).

2. Finding the optimal element in K^c (i.e., finding the best clustering in K^c), provided that B is the adder and r is the threshold decision rule.
The general scheme of collective synthesis is shown in Figure 5.
We note that the total number of possible values of B is bounded from above by the quantity (l!)^N. Let s be the operator that performs a permutation of the columns of m × l matrices with the help of a substitution <j_1, j_2, …, j_l>, and let S = {s} be the set of all such operators. We assume that rs = sr for all s ∈ S.
We extend s ∈ S to the N-dimensional case, obtaining σ. From the definition of the adder it follows that the product rB defines the desired mapping and specifies some ensemble clustering. It remains to determine the optimal element of K^c, to find it and the corresponding Ĩ^0. We introduce definitions of the potentially best and worst-case solutions. As the "ideal" collective solution, we consider the case when all algorithms give essentially the same partitions or coverings.
We consider a distance function between two numerical matrices. Denote by M the set of all contrast matrices, and by M̄ the set of all blurred matrices. We introduce definitions for estimating the quality of matrices.
This matrix is called the mean blurred matrix.
We note that the optima according to the criteria (Eq. (2)) and (Eq. (3)) do not have to coincide. The sets M and M̄ intersect. Theorem 1. The sets of optimal solutions by the criteria Eqs. (2) and (4) coincide.

Let us show this. Summing over the whole set of values of the pairs of indices i, j, we obtain the required equality. We consider the problem of finding optimal ensemble clusterings for the criterion (2). We introduce the following notation. Let π_ν, ν = 1, 2, …, N, be some permutation of the set π_0 = <1, 2, …, l>. A set of permutations π = <π_1, π_2, …, π_N> uniquely determines the matrix of estimates.
We will further assume that the "initial" matrix ‖α^ν_ij‖_{m×l} of the algorithm A^c_ν corresponds to the permutation π_0, and that ‖α′^ν_ij‖_{m×l} is the matrix of the algorithm A^c_ν corresponding to some permutation π_ν. We transform this expression using the corresponding identity; thus, minimizing the function is equivalent to maximizing the second sum of the expression.
After applying the permutations π_ν, the sets X_j, Y_j, j = 1, 2, …, l, change; Figure 7 shows these changes schematically.
Theorem 2. The proof is given in [12,13]. Theorem 2 is the basis for creating an effective minimization algorithm for Φ. Since the second sum is always nonpositive, we have an upper bound. We consider the problem of minimizing the function Δ_ν. We write out all possible variants of the sum over j = 1, 2, …, l in the form of a table in Figure 8. Then minimizing this function reduces to finding a maximum matching in a bipartite graph, for which we can use the polynomial Hungarian algorithm [16].
It is clear that min over π_ν of Δ_ν ≤ 0. Now we can propose the following steepest-descent heuristic. Algorithm.

We find Δ_ν for each ν = 1, 2, …, N, apply the permutation π_ν that gives the greatest decrease of the criterion, and repeat until no further decrease is possible.
NOTE. We note that our algorithm does not necessarily find even a local minimum of the criterion Φ(B). Nevertheless, this algorithm is very fast; its complexity at each iteration is estimated as O(l^5 mN).
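Under these assumptions, the steepest-descent heuristic might be sketched as follows. The function name is ours, and the inner column-matching step is done by brute force over the l! permutations; the chapter's version solves that step with the polynomial Hungarian algorithm instead.

```python
import itertools
import numpy as np

def align_clusterings(mats, n_iter=20):
    """Steepest-descent sketch of committee synthesis: repeatedly permute
    the columns of each algorithm's 0/1 matrix so that it agrees as much
    as possible with the sum of the remaining matrices, until no single
    permutation improves the agreement."""
    mats = [m.copy() for m in mats]
    l = mats[0].shape[1]
    for _ in range(n_iter):
        changed = False
        for v in range(len(mats)):
            rest = sum(mats[k] for k in range(len(mats)) if k != v)
            best_p, best_score = None, -1
            for p in itertools.permutations(range(l)):
                score = (mats[v][:, list(p)] * rest).sum()  # overlap with the others
                if score > best_score:
                    best_score, best_p = score, p
            permuted = mats[v][:, list(best_p)]
            if not np.array_equal(permuted, mats[v]):
                mats[v] = permuted
                changed = True
        if not changed:
            break
    B = sum(mats)  # the adder; a threshold rule r turns it into clusters
    return mats, B
```

After alignment, each row of the adder B concentrates its mass on one column, so a simple threshold rule recovers the ensemble clusters.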

The algorithm of collective k-means
The results of clustering a sample of m objects into l clusters by N algorithms are given, which we can write in the form of a binary matrix ‖α^ν_ij‖, ν = 1, 2, …, N, i = 1, 2, …, m, j = 1, 2, …, l. We assume that the cluster numbers in each algorithm are fixed. Then each horizontal layer number i of this three-dimensional matrix describes the results of clustering the object x_i. As an ensemble clustering of the sample X, we can take the result of clustering the "new" descriptions, the layers of the original matrix ‖α^ν_ij‖. As the method of clustering, we take the method of minimizing the dispersion criterion. Given N layers ‖α^ν_{i_1 j}‖, ‖α^ν_{i_2 j}‖, …, ‖α^ν_{i_N j}‖ grouped by the heuristic clustering algorithms, we calculate their sample mean ‖α*_{νj}‖ as the solution of the least-squares problem, whence α*_{νj} = (1/N) Σ_{μ=1}^N α^ν_{i_μ j}. Note that this method makes it possible to compute ensemble clusterings K = {K*_1, K*_2, …, K*_l} such that the heuristic descriptions of the objects of one cluster of the collective solution are close to each other in the Euclidean metric. The committee synthesis of collective decisions provides more interpretable solutions. Indeed, if K_ν = {K^ν_1, K^ν_2, …, K^ν_l}, ν = 1, 2, …, N, are the separate solutions of the heuristic clustering algorithms, then a cluster of the collective solution will be the "intersection" of many of the original clusters K^1_{i_1}, K^2_{i_2}, …, K^N_{i_N}.
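A minimal sketch of the collective k-means idea, assuming the input is the three-dimensional 0/1 matrix of shape N × m × l and using a plain Lloyd-style k-means on the stacked layers (the function name and the random initialization are ours):

```python
import numpy as np

def collective_kmeans(alphas, l, n_iter=50, seed=0):
    """Collective k-means sketch: each object x_i is re-described by layer i
    of the N x m x l binary matrix (an N*l vector of 0/1 memberships), and
    these new descriptions are clustered by plain k-means under the
    Euclidean metric, as the dispersion-criterion method prescribes."""
    N, m, _ = alphas.shape
    feats = alphas.transpose(1, 0, 2).reshape(m, -1)  # one row per object
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(m, size=l, replace=False)].astype(float)
    for _ in range(n_iter):
        # squared Euclidean distances of every layer to every center
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([feats[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(l)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels
```

Objects whose layers agree across the N algorithms end up in the same collective cluster, since their stacked descriptions coincide in the Euclidean metric.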

Conclusion
This chapter consists of two parts. First, clustering criteria based on stability are introduced. Next, we propose an approach to processing the sets of partitions obtained for the same sample. As the initial clusterings, it is better to use stable ones. It is also shown how a human can take part in constructing the committee synthesis of ensemble clusterings.

Definition 8. The committee synthesis of an information matrix C on an element Ĩ^0 = (I^0_1, I^0_2, …, I^0_N) is its computation by the formula C = rB(Ĩ^0), where r is the threshold decision rule with thresholds δ_i ∈ R.

Figure 6 illustrates the sets of contrasting and blurred matrices; arrows indicate some elements of the sets.

Figure 6. The sets of contrasting matrices M, blurred matrices M̄, and the set of matrices {B}.

Figure 8. All possible variants of Σ_{j=1}^l Σ_{i∈X_j} α^ν_{i μ_ν(j)} for all admissible j and i.

Table 2. Values of the quasi-clustering criteria: the case of strongly intersecting distributions.