
Combining external attention GAN with deep convolutional neural networks for real–fake identification of luxury handbags

  • Original article
  • Published in The Visual Computer

Abstract

Identifying fake luxury handbags from their images is crucial to preventing counterfeiting. Current state-of-the-art methods rely on professional detection devices operated by appraisers with prior domain knowledge, or on shallow convolutional neural network (CNN) models trained on limited datasets, and little attention has been paid to external attention mechanisms. In addition, existing methods ignore the class imbalance between real and fake handbag datasets, which limits their predictive capacity. This paper proposes an innovative hybrid framework for fake luxury handbag identification that combines external attention generative adversarial networks (EAGANs) with deep convolutional neural networks (DCNNs) and makes four improvements: (1) EAGAN employs a transformer to encode and decode information in both the generator and the discriminator for richer local feature representation; (2) a new attention module based on the external attention mechanism is introduced into the generator and discriminator of EAGAN to guide the network toward global feature representation; (3) a simple CNN auxiliary classifier is appended to EAGAN to learn the influence of different features automatically and efficiently, and it is combined with the external attention module to jointly learn representative features; (4) a three-stage weighted loss is proposed to train the EAGAN model, which is then combined with a DCNN for real–fake identification of luxury handbags. Integrating these improvements in series progressively enhances the model's performance. Experimental results show that our framework outperforms existing state-of-the-art approaches.
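For orientation, the external attention mechanism that EAGAN builds on (improvement 2) replaces quadratic token-to-token self-attention with attention against two small, learnable external memories shared across the whole dataset, which is what lets it model the global feature representation mentioned above. The following is a minimal PyTorch sketch of that generic mechanism, not the authors' EAGAN implementation; the names ExternalAttention and num_memory_units are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExternalAttention(nn.Module):
    """External attention (Guo et al., 2021): each token attends to a small
    set of learnable external memory units, implemented as two linear
    layers (M_k and M_v), with double normalization of the attention map."""

    def __init__(self, d_model: int, num_memory_units: int = 64):
        super().__init__()
        self.mk = nn.Linear(d_model, num_memory_units, bias=False)  # M_k memory
        self.mv = nn.Linear(num_memory_units, d_model, bias=False)  # M_v memory

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, d_model), e.g. a flattened feature map
        attn = self.mk(x)                     # (B, N, S): similarity to memory units
        attn = F.softmax(attn, dim=1)         # normalize across tokens ...
        attn = attn / (attn.sum(dim=2, keepdim=True) + 1e-9)  # ... then across units
        return self.mv(attn)                  # (B, N, d_model): attended features


# Toy usage: a 14x14 feature map with 256 channels flattened to 196 tokens.
ea = ExternalAttention(d_model=256, num_memory_units=64)
out = ea(torch.randn(4, 196, 256))  # -> torch.Size([4, 196, 256])
```

Because the memories are shared across all samples rather than computed per image, such a module is cheap enough to insert into both the generator and the discriminator, which is how the abstract describes its use in EAGAN.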




Acknowledgements

This work was supported by the Scientific and Technological Innovation Leading Plan of High-tech Industry of Hunan Province (2020GK2021), the National Natural Science Foundation of China (No. 61902434), and the Natural Science Foundation of Hunan Province, China (No. 2019JJ50826).

Author information


Corresponding author

Correspondence to Jianbiao Peng.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Peng, J., Zou, B. & Zhu, C. Combining external attention GAN with deep convolutional neural networks for real–fake identification of luxury handbags. Vis Comput 39, 971–982 (2023). https://doi.org/10.1007/s00371-021-02378-x

