Boosting Generalized Few-Shot Learning by Scattering Intra-class Distribution

  • Conference paper
  • In: Machine Learning and Knowledge Discovery in Databases: Research Track (ECML PKDD 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14170)

Abstract

Generalized Few-Shot Learning (GFSL) applies a model trained on the base classes to predict samples from both base and novel classes, where each novel class is provided with only a few labeled samples during testing. Owing to the severe data imbalance between base and novel classes, GFSL easily suffers from the prediction shift issue, in which most test samples tend to be classified into the base classes. Unlike existing works that address this issue through either multi-stage training or complicated model design, we argue that extracting both discriminative and generalized feature representations is all GFSL needs, which can be achieved by simply scattering the intra-class distribution during training. Specifically, we introduce two self-supervised auxiliary tasks and a label permutation task that encourage the model to learn more image-level feature representations and push the decision boundary from the novel classes towards the base classes during inference. Our method is one-stage and supports online inference. Experiments on the miniImageNet and tieredImageNet datasets show that the proposed method achieves performance comparable to state-of-the-art multi-stage competitors on both traditional FSL and GFSL tasks, empirically confirming that feature representation is the key to GFSL.
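The abstract's self-supervised auxiliary tasks operate at the image level. A common instantiation of such a task in the few-shot literature is rotation prediction, where each image is rotated by 0, 90, 180, and 270 degrees and the model must predict the rotation applied. The paper's exact auxiliary tasks are specified in the full text; the sketch below assumes rotation prediction purely as an illustration, and `make_rotation_task` is a hypothetical helper name, not the authors' code.

```python
import numpy as np

def make_rotation_task(batch):
    """Expand a batch of images into 4 rotated copies with rotation labels.

    batch: array of shape (N, H, W, C)
    returns (images, rot_labels) of shapes (4N, H, W, C) and (4N,),
    where rot_labels[i] in {0, 1, 2, 3} encodes k * 90 degrees.
    """
    rotated, labels = [], []
    for k in range(4):  # 0, 90, 180, 270 degrees
        rotated.append(np.rot90(batch, k=k, axes=(1, 2)))
        labels.append(np.full(batch.shape[0], k))
    return np.concatenate(rotated), np.concatenate(labels)

# Usage: an 8-image batch becomes a 32-image batch with rotation targets.
batch = np.random.rand(8, 32, 32, 3)
imgs, rots = make_rotation_task(batch)
# imgs.shape == (32, 32, 32, 3); rots contains 8 samples of each rotation id
```

The intuition connecting this to the abstract: predicting rotations forces a shared backbone to retain image-specific details that a purely class-level objective would discard, which tends to scatter each class's feature distribution rather than collapse it onto a single prototype.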

Acknowledgments

This work is supported in part by the Key R&D Program of Zhejiang Province, China (2023C01043, 2021C01119) and the National Natural Science Foundation of China under Grants 62002320, U19B2043, and 61672456.

Author information

Corresponding author

Correspondence to Yunlong Yu.

Ethics declarations

Ethical Statement

In this paper, we mainly investigate how to balance the base and novel classes in the generalized FSL task and propose that what GFSL needs is to extract both discriminative and generalized feature representations. We consider that the required discussion of ethics and future societal effects is not applicable to our work.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Yu, Y., Jin, L., Li, Y. (2023). Boosting Generalized Few-Shot Learning by Scattering Intra-class Distribution. In: Koutra, D., Plant, C., Gomez Rodriguez, M., Baralis, E., Bonchi, F. (eds.) Machine Learning and Knowledge Discovery in Databases: Research Track. ECML PKDD 2023. Lecture Notes in Computer Science, vol. 14170. Springer, Cham. https://doi.org/10.1007/978-3-031-43415-0_26

  • DOI: https://doi.org/10.1007/978-3-031-43415-0_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-43414-3

  • Online ISBN: 978-3-031-43415-0

  • eBook Packages: Computer Science, Computer Science (R0)
