Abstract
Big data is often characterized by a huge volume and mixed types of data, including numeric and categorical attributes. The k-prototypes algorithm is one of the best-known clustering methods for mixed data. Despite this, it is not suited to dealing with huge volumes of data. Several methods have attempted to solve the efficiency problem of k-prototypes using parallel frameworks. However, none of the existing clustering methods for mixed data satisfies both accuracy and efficiency. To deal with this issue, we propose a novel parallel k-prototypes clustering method that improves both. The proposed method integrates a parallel approach built on the Spark framework with a new sampling-based center initialization strategy. Experiments performed on simulated and real datasets show that the proposed method is scalable and improves both the efficiency and accuracy of existing k-prototypes methods.
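The k-prototypes baseline the abstract builds on combines squared Euclidean distance on numeric attributes with simple-matching dissimilarity on categorical ones (weighted by a parameter gamma), iterating nearest-prototype assignment with mean/mode center updates. The following is a minimal serial sketch of that baseline for illustration only, not the paper's Spark implementation or its sampling-based initialization; all names and the random initialization are assumptions.

```python
import random
from collections import Counter

def mixed_distance(x, c, num_idx, cat_idx, gamma):
    # Squared Euclidean distance over numeric attributes
    d_num = sum((x[i] - c[i]) ** 2 for i in num_idx)
    # Simple-matching dissimilarity over categorical attributes
    d_cat = sum(1 for i in cat_idx if x[i] != c[i])
    return d_num + gamma * d_cat

def k_prototypes(data, k, num_idx, cat_idx, gamma=1.0, max_iter=20, seed=0):
    # Illustrative random initialization (the paper proposes a
    # sampling-based initialization strategy instead).
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(data, k)]
    labels = [0] * len(data)
    for _ in range(max_iter):
        # Assignment step: nearest prototype under the mixed dissimilarity
        new_labels = [
            min(range(k),
                key=lambda j: mixed_distance(p, centers[j], num_idx, cat_idx, gamma))
            for p in data
        ]
        if new_labels == labels:
            break  # converged: no assignment changed
        labels = new_labels
        # Update step: mean for numeric attributes, mode for categorical ones
        for j in range(k):
            members = [p for p, l in zip(data, labels) if l == j]
            if not members:
                continue
            for i in num_idx:
                centers[j][i] = sum(p[i] for p in members) / len(members)
            for i in cat_idx:
                centers[j][i] = Counter(p[i] for p in members).most_common(1)[0][0]
    return labels, centers
```

In a Spark-based design such as the one proposed here, the assignment step is the naturally parallel part (each partition assigns its points independently), while the update step aggregates per-cluster sums and category counts across partitions.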
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Jridi, H., Ben HajKacem, M.A., Essoussi, N. (2020). Parallel K-Prototypes Clustering with High Efficiency and Accuracy. In: Song, M., Song, IY., Kotsis, G., Tjoa, A.M., Khalil, I. (eds) Big Data Analytics and Knowledge Discovery. DaWaK 2020. Lecture Notes in Computer Science(), vol 12393. Springer, Cham. https://doi.org/10.1007/978-3-030-59065-9_29
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-59064-2
Online ISBN: 978-3-030-59065-9