
Improve the efficiency of handcrafted features in image retrieval by adding selected feature generating layers of deep convolutional neural networks

  • Original Paper
  • Published in: Signal, Image and Video Processing

Abstract

Today, with the rapid growth of communication technology and the widespread use of social networks and smartphones, the amount of data users store in the form of images has increased significantly. Consequently, many researchers have focused on image retrieval over the past decade. Image retrieval means finding the images in a vast database that are most similar to a query sample in terms of content and semantics. Various image retrieval methods based on feature engineering or deep features have been proposed so far; most methods that rely on handcrafted features, despite their lower runtime, usually perform worse than deep learning-based approaches. In this paper, to increase the efficiency of handcrafted features in image retrieval, they are combined with the output of residual blocks of deep neural networks in a feature fusion phase. The efficiency of the feature-generating layers of common deep networks, namely residual, classical sequential convolution, and bottleneck layers, is analyzed in order to improve image retrieval systems based on handcrafted features. For this purpose, the classification layers of popular deep networks are removed, and the output of the feature-generating layers at different depths is converted, with the help of a flatten layer, into feature vectors that can be concatenated with handcrafted numerical features in the retrieval system. A variety of popular handcrafted features are employed: color and texture in the spatial domain and wavelet features in the frequency domain. The result is a novel hybrid feature set, combining classical feature engineering techniques with deep convolutional neural networks, that is well suited to image retrieval. The efficiency of the proposed method is evaluated in terms of precision and recall on the benchmark Corel-1k and Corel-5k datasets, where it achieves an accuracy of 96.68% and 94.56%, respectively. The comparative results show that fusing residual block feature maps with handcrafted features improves the final performance of the image retrieval system more than fusing bottleneck or sequential classical convolution layers. Comparison with state-of-the-art machine learning and deep learning-based techniques shows that the proposed method provides promising results in terms of precision and recall.
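
For readers who want a concrete picture of the pipeline the abstract describes, the sketch below shows one possible way to realize it in Python with TensorFlow/Keras, scikit-image and PyWavelets. It is an illustrative assumption rather than the authors' implementation: the tapped residual layer (`conv4_block6_out` of ResNet50), the histogram bin counts, the LBP and wavelet settings, and the Euclidean ranking are all placeholder choices.

```python
# Minimal sketch of the fusion pipeline described in the abstract, assuming a
# Keras ResNet50 backbone. The tapped layer name ("conv4_block6_out"), the
# histogram sizes, the LBP/wavelet settings and the Euclidean ranking are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
import pywt
from skimage.color import rgb2gray, rgb2hsv
from skimage.feature import local_binary_pattern
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet import preprocess_input
from tensorflow.keras.models import Model

# Backbone with the classification head removed; tap a residual-stage output.
backbone = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
feature_model = Model(inputs=backbone.input,
                      outputs=backbone.get_layer("conv4_block6_out").output)


def deep_features(img_rgb):
    """Flatten the feature maps of the selected residual block (224x224 RGB input)."""
    x = preprocess_input(img_rgb.astype("float32")[None, ...])
    fmap = feature_model.predict(x, verbose=0)   # shape (1, 14, 14, 1024)
    return fmap.reshape(-1)


def handcrafted_features(img_rgb):
    """HSV color histogram + uniform LBP texture histogram + wavelet sub-band stats."""
    hsv = rgb2hsv(img_rgb)
    color_hist = np.concatenate(
        [np.histogram(hsv[..., c], bins=16, range=(0, 1), density=True)[0]
         for c in range(3)])
    gray = (rgb2gray(img_rgb) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")   # labels 0..9
    lbp_hist = np.histogram(lbp, bins=10, range=(0, 10), density=True)[0]
    cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), "haar")
    wavelet_stats = np.array([f(band) for band in (cA, cH, cV, cD)
                              for f in (np.mean, np.std)])
    return np.concatenate([color_hist, lbp_hist, wavelet_stats])


def fused_descriptor(img_rgb):
    """Feature fusion phase: concatenate L2-normalized deep and handcrafted vectors."""
    d, h = deep_features(img_rgb), handcrafted_features(img_rgb)
    d /= np.linalg.norm(d) + 1e-8
    h /= np.linalg.norm(h) + 1e-8
    return np.concatenate([d, h])


def retrieve(query_img, database_imgs, top_k=10):
    """Rank database images (224x224 RGB arrays) by distance to the query descriptor."""
    q = fused_descriptor(query_img)
    db = np.stack([fused_descriptor(im) for im in database_imgs])
    return np.argsort(np.linalg.norm(db - q, axis=1))[:top_k]
```

Because the flattened deep vector is far longer than the handcrafted one, normalizing (or pooling) each part before concatenation, as done above, is a common way to keep the fusion from being dominated by the deep features.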


Availability of data and materials

The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request. All other public datasets used in this study are publicly available and cited in the references.


Funding

The authors have no relevant financial interests in this manuscript.

Author information

Contributions

All authors have contributed equally.

Corresponding author

Correspondence to Shervan Fekri-Ershad.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethical approval

Ethical approval is not applicable to this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Shamsipour, G., Fekri-Ershad, S., Sharifi, M. et al. Improve the efficiency of handcrafted features in image retrieval by adding selected feature generating layers of deep convolutional neural networks. SIViP 18, 2607–2620 (2024). https://doi.org/10.1007/s11760-023-02934-z

