
K-Splits: Improved K-Means Clustering Algorithm to Automatically Detect the Number of Clusters

  • Conference paper
  • In: Computer Networks, Big Data and IoT

Abstract

This paper introduces k-splits, an improved hierarchical algorithm based on k-means that clusters data without prior knowledge of the number of clusters. K-splits starts from a small number of clusters and incrementally splits them along the most significant data distribution axis into better-fitting clusters where needed. Accuracy and speed are the two main advantages of the proposed method. Experiments on six synthetic benchmark datasets plus two real-world datasets, MNIST and Fashion-MNIST, show that the proposed algorithm is highly accurate at automatically finding the correct number of clusters under different conditions. The experimental analysis also indicates that k-splits is faster than similar methods and can even be faster than standard k-means in lower dimensions. Furthermore, this article examines the effects of algorithm hyperparameters and dataset parameters on k-splits. Finally, it suggests using k-splits to uncover the exact positions of the centroids and then feeding them as initial points to the k-means algorithm to fine-tune the results.
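The splitting idea described above can be illustrated with a short, hedged sketch: starting from a single cluster, each cluster is examined along its most significant distribution axis (its top principal direction) and split when the points projected onto that axis look separable, and the resulting centroids are then handed to standard k-means as initial points, as the abstract suggests. The gap-based split test, the gap_ratio threshold, and the helper names (principal_axis, should_split, k_splits_sketch) below are illustrative assumptions, not the exact criteria from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans


def principal_axis(points):
    """Unit vector of the most significant data distribution axis (top principal direction)."""
    centered = points - points.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[0]


def should_split(points, axis, gap_ratio=0.15):
    """Assumed heuristic: split when the sorted 1-D projections contain a large gap."""
    proj = np.sort(points @ axis)
    if len(proj) < 4:
        return False, None
    gaps = np.diff(proj)
    i = int(np.argmax(gaps))
    spread = proj[-1] - proj[0]
    if spread > 0 and gaps[i] / spread > gap_ratio:
        return True, (proj[i] + proj[i + 1]) / 2.0  # split threshold on the projection axis
    return False, None


def k_splits_sketch(X, max_clusters=50):
    """Recursively split clusters along their principal axes; return the centroids found."""
    clusters = [X]
    changed = True
    while changed and len(clusters) < max_clusters:
        changed = False
        new_clusters = []
        for pts in clusters:
            axis = principal_axis(pts)
            split, thr = should_split(pts, axis)
            if split:
                proj = pts @ axis
                new_clusters.append(pts[proj <= thr])
                new_clusters.append(pts[proj > thr])
                changed = True
            else:
                new_clusters.append(pts)
        clusters = new_clusters
    return np.vstack([pts.mean(axis=0) for pts in clusters])


if __name__ == "__main__":
    # Three well-separated 2-D blobs; the sketch should find three centroids.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(loc=c, scale=0.2, size=(200, 2))
                   for c in [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]])
    centroids = k_splits_sketch(X)
    # Fine-tuning step suggested in the abstract: use the discovered centroids
    # as initial points for standard k-means.
    km = KMeans(n_clusters=len(centroids), init=centroids, n_init=1).fit(X)
    print("detected clusters:", len(centroids))
```

In this sketch a cluster keeps splitting only while a sufficiently large gap appears along its principal axis, which is one simple way to let the number of clusters emerge from the data rather than being fixed in advance.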



Author information

Corresponding author

Correspondence to Seyed Omid Mohammadi.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Mohammadi, S.O., Kalhor, A., Bodaghi, H. (2022). K-Splits: Improved K-Means Clustering Algorithm to Automatically Detect the Number of Clusters. In: Pandian, A.P., Fernando, X., Haoxiang, W. (eds) Computer Networks, Big Data and IoT. Lecture Notes on Data Engineering and Communications Technologies, vol 117. Springer, Singapore. https://doi.org/10.1007/978-981-19-0898-9_15

  • DOI: https://doi.org/10.1007/978-981-19-0898-9_15

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-0897-2

  • Online ISBN: 978-981-19-0898-9

  • eBook Packages: Engineering, Engineering (R0)
