
Data Points Clustering via Gumbel Softmax

  • Original Research
  • Published in SN Computer Science

Abstract

Finding useful patterns in data is a long-standing pursuit, and identifying the cluster groups within a dataset is one of the most researched problems in this area. This paper introduces a new clustering method, Data Points Clustering via Gumbel Softmax (DPCGS), and demonstrates that it is well suited to clustering data points. We evaluate the efficiency and clustering quality of DPCGS through several experiments, which show that, depending on the dataset, the method can identify statistically relevant clustering structures. We also present a performance comparison on the Wine, Wheat Seeds, Iris, and Wisconsin Breast Cancer datasets against established and recently proposed clustering algorithms: Birch, K-Means, Affinity Propagation, Agglomerative Clustering, Mini-batch K-Means, and Nested Mini-batch K-Means. DPCGS outperforms most of these algorithms.
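To make the abstract's core idea concrete, the sketch below relaxes discrete cluster assignments with Gumbel-Softmax samples so that assignments and cluster centres can be optimized jointly by gradient descent. It is a minimal toy illustration, assuming per-point assignment logits, a squared-error reconstruction loss, and hand-tuned step sizes; the paper's exact formulation may differ (see the repository linked under Source Code Availability for the authors' implementation).

```python
# Toy sketch of Gumbel-Softmax-based clustering (NumPy only).
# Assumptions, not the paper's exact method: per-point cluster logits,
# a squared-error reconstruction loss, plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau):
    """Differentiable, approximately one-hot sample for each row of logits."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-10) + 1e-10)
    z = (logits + g) / tau
    z = np.exp(z - z.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

# Two well-separated 2-D blobs as toy data.
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(3.0, 0.3, (50, 2))])
n, k, tau = len(X), 2, 0.5
lr_c, lr_z = 0.1, 5.0                  # step sizes tuned for this toy problem

logits = rng.normal(size=(n, k))       # learnable per-point assignment logits
centroids = rng.normal(size=(k, 2))    # learnable cluster centres

for _ in range(1000):
    A = gumbel_softmax(logits, tau)            # relaxed one-hot assignments
    diff = A @ centroids - X                   # reconstruction error per point
    grad_C = 2.0 / n * A.T @ diff              # d(loss)/d(centroids)
    grad_A = 2.0 / n * diff @ centroids.T      # d(loss)/d(assignments)
    # Backpropagate through the softmax; the Gumbel noise is treated as a
    # constant under the reparameterization trick.
    grad_logits = A * (grad_A - (grad_A * A).sum(axis=1, keepdims=True)) / tau
    centroids -= lr_c * grad_C
    logits -= lr_z * grad_logits

labels = logits.argmax(axis=1)         # hard cluster labels after training
print(labels[:5], labels[-5:])
```

In practice an autodiff framework would replace the hand-written gradients; PyTorch's torch.nn.functional.gumbel_softmax, for instance, implements the same relaxation.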



Author information

Correspondence to Deepak Bhaskar Acharya.

Ethics declarations

Conflict of Interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Source Code Availability

The code for this research is available on GitHub at: https://github.com/deepakacharyab/data_points_cluster_gumbel_softmax

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Acharya, D.B., Zhang, H. Data Points Clustering via Gumbel Softmax. SN COMPUT. SCI. 2, 311 (2021). https://doi.org/10.1007/s42979-021-00707-4

