
Image retrieval based on deep Tamura feature descriptor

  • Regular Paper, published in Multimedia Systems


Abstract

Different levels of visual features play different roles in image retrieval, and deep features can express higher-level features or semantic information. The Tamura texture feature is a handcrafted feature that represents texture in a way that corresponds to human visual perception. However, exploiting Tamura textures within deep learning models and aggregating them into a discriminative representation remains challenging. To address this problem, we propose a novel image retrieval approach named the deep Tamura feature descriptor (DTFD). The main highlights are as follows: (1) We exploit Tamura texture features within deep feature maps to obtain deep Tamura features, which improves the discriminative power of deep features. (2) We propose a new spatial layout optimization method that yields optimized deep Tamura feature maps; it highlights target objects and reduces the negative effects of background noise, thereby improving retrieval performance. (3) We combine the advantages of Tamura texture features and deep features to obtain a more effective yet compact representation. Comparative experiments on five well-known benchmark datasets demonstrate that the deep Tamura feature descriptor yields satisfactory results and outperforms several existing state-of-the-art methods.
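The first highlight — computing Tamura texture statistics over deep feature maps rather than over raw pixels — can be illustrated with the classic Tamura contrast measure, defined as σ / α₄^(1/4) with kurtosis α₄ = μ₄ / σ⁴. The sketch below is an illustration based on that standard definition only, not the paper's exact formulation; the function names and the channel-wise aggregation scheme are assumptions.

```python
import numpy as np

def tamura_contrast(channel):
    """Tamura contrast of one 2-D map: sigma / alpha4^(1/4),
    where alpha4 = mu4 / sigma^4 is the kurtosis (Tamura et al., 1978)."""
    x = channel.astype(np.float64).ravel()
    sigma2 = x.var()
    if sigma2 == 0.0:          # flat channel carries no texture
        return 0.0
    mu4 = np.mean((x - x.mean()) ** 4)
    alpha4 = mu4 / sigma2 ** 2
    return float(np.sqrt(sigma2) / alpha4 ** 0.25)

def deep_tamura_contrast(feature_maps):
    """Apply Tamura contrast channel-wise to a C x H x W deep feature
    tensor, yielding a C-dimensional texture descriptor."""
    return np.array([tamura_contrast(fm) for fm in feature_maps])

# Random stand-in for a conv-layer activation tensor (e.g. 512 x 14 x 14)
fmap = np.random.rand(512, 14, 14)
desc = deep_tamura_contrast(fmap)
print(desc.shape)  # (512,)
```

Because the statistic is computed per channel, the resulting descriptor length equals the number of feature-map channels, which keeps the representation compact regardless of spatial resolution.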



Data availability

The authors used third-party data and therefore do not own the data; the benchmark datasets are publicly available. The code of the proposed method is available on request.


Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 62266008.

Author information


Contributions

Ling-Jie Kong: conceptualization, software, validation, writing – original draft, resources, data curation. Qiaoping He: language modification, review and editing, data validation. Guang-Hai Liu: methodology, review and editing, supervision, revision, funding acquisition, formal analysis.

Corresponding author

Correspondence to Guang-Hai Liu.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Communicated by M. Katsurai.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kong, LJ., He, Q. & Liu, GH. Image retrieval based on deep Tamura feature descriptor. Multimedia Systems 30, 148 (2024). https://doi.org/10.1007/s00530-024-01323-x
