Abstract
This paper introduces k-splits, an improved hierarchical clustering algorithm based on k-means that clusters data without prior knowledge of the number of clusters. K-splits starts from a small number of clusters and, when needed, incrementally splits them along the most significant axis of the data distribution into better-fitting clusters. Accuracy and speed are the two main advantages of the proposed method. Experiments on six synthetic benchmark datasets and two real-world datasets, MNIST and Fashion-MNIST, show that the proposed algorithm is highly accurate at automatically finding the correct number of clusters under different conditions. The experimental analysis also indicates that k-splits is faster than similar methods and can even outperform standard k-means in lower dimensions. Furthermore, the article examines the effects of algorithm hyperparameters and dataset parameters on k-splits. Finally, it suggests using k-splits to locate the centroids and then feeding them as initial points to the k-means algorithm to fine-tune the results.
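The splitting idea described in the abstract can be illustrated with a minimal sketch: start from a single cluster and repeatedly divide the most spread-out cluster along its dominant axis (here taken as the top singular vector of the mean-centred points). The stopping rule below (`min_spread` on the largest singular value) and all function names are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def principal_axis_split(points):
    """Split one cluster into two along its most significant axis.

    The dominant axis is the top right-singular vector of the
    mean-centred points (via SVD); points are divided by the sign
    of their projection onto that axis.
    """
    centred = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    proj = centred @ vt[0]  # signed position along the dominant axis
    return points[proj < 0], points[proj >= 0]

def k_splits_sketch(data, max_clusters=8, min_spread=1.0):
    """Grow clusters by splitting until all are tight or the budget runs out."""
    clusters = [data]
    while len(clusters) < max_clusters:
        # Spread of each cluster = largest singular value of its centred points.
        spreads = [np.linalg.svd(c - c.mean(axis=0), compute_uv=False)[0]
                   for c in clusters]
        worst = int(np.argmax(spreads))
        # Illustrative stopping rule: spread threshold scaled by cluster size.
        if spreads[worst] < min_spread * np.sqrt(len(clusters[worst])):
            break
        left, right = principal_axis_split(clusters[worst])
        if len(left) == 0 or len(right) == 0:  # degenerate split, stop
            break
        clusters.pop(worst)
        clusters += [left, right]
    centroids = np.vstack([c.mean(axis=0) for c in clusters])
    return clusters, centroids
```

Following the abstract's final suggestion, the returned `centroids` could then seed a standard k-means run (e.g. as explicit initial centers) to fine-tune the assignments.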
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Mohammadi, S.O., Kalhor, A., Bodaghi, H. (2022). K-Splits: Improved K-Means Clustering Algorithm to Automatically Detect the Number of Clusters. In: Pandian, A.P., Fernando, X., Haoxiang, W. (eds) Computer Networks, Big Data and IoT. Lecture Notes on Data Engineering and Communications Technologies, vol 117. Springer, Singapore. https://doi.org/10.1007/978-981-19-0898-9_15
DOI: https://doi.org/10.1007/978-981-19-0898-9_15
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-0897-2
Online ISBN: 978-981-19-0898-9
eBook Packages: Engineering (R0)