
Abstract

When sampling data of specific classes (i.e., known classes) for a scientific task, collectors may encounter unknown classes (i.e., novel classes). Since these novel classes might be valuable for future research, collectors will also sample them and assign them to several clusters with the help of known-class data. This assigning process is known as novel class discovery (NCD). However, category confusion is common during sampling and can make NCD unreliable. To tackle this problem, this paper introduces a new and more realistic setting in which collectors may misidentify known classes and even confuse known classes with novel classes—we name it NCD under unreliable sampling (NUSA). We find that NUSA empirically degrades existing NCD methods if sampling errors are not taken into account. To handle NUSA, we propose an effective solution, named hidden-prototype-based discovery network (HPDN): (1) we obtain relatively clean data representations even from the confusedly sampled data; (2) we propose a mini-batch K-means variant for robust clustering, alleviating the negative impact of residual errors embedded in the representations by detaching the noisy supervision in a timely manner. Experiments demonstrate that, under NUSA, HPDN significantly outperforms competitive baselines (e.g., \(6\%\) more than the best baseline on CIFAR-10) and remains robust when encountering serious sampling errors.
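The abstract does not detail HPDN's mini-batch K-means variant or its noisy-supervision detaching mechanism; as background for readers unfamiliar with the clustering procedure it builds on, the following is a minimal sketch of plain mini-batch K-means (Sculley-style per-center decaying learning rates), not the authors' method. All function and parameter names here are illustrative.

```python
import numpy as np

def minibatch_kmeans(X, k, batch_size=32, n_iters=100, seed=0):
    """Plain mini-batch K-means: each center is the running mean of the
    batch points assigned to it, via a per-center decaying learning rate."""
    rng = np.random.default_rng(seed)
    # Initialize centers by sampling k distinct data points.
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    counts = np.zeros(k)  # number of points each center has absorbed
    for _ in range(n_iters):
        batch = X[rng.choice(len(X), size=batch_size, replace=False)]
        # Assign each batch point to its nearest center (squared Euclidean).
        dists = ((batch[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        # Move each winning center toward its point with step 1/count,
        # so the center tracks the mean of all points assigned so far.
        for x, j in zip(batch, assign):
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]
    return centers

def assign_clusters(X, centers):
    """Label each point with the index of its nearest center."""
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return dists.argmin(1)
```

In the NCD setting, such a clustering step would operate on learned representations rather than raw inputs; HPDN's contribution lies in making this step robust to residual label noise, which the sketch above does not address.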


Data Availability

We provide links to the employed datasets as the data availability statement. CIFAR-10 & CIFAR-100: link. ImageNet: link. CUB: link. Stanford Cars: link.

Notes

  1. The novel classes are currently considered as one class.


Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 91948303-1, No. 62372459, No. 62376282). We would like to thank the editor and reviewers for their valuable comments, which helped improve the quality of this work.

Author information

Corresponding author

Correspondence to Haoang Chi.

Additional information

Communicated by Zhun Zhong.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Chi, H., Yang, W., Liu, F. et al. Does Confusion Really Hurt Novel Class Discovery? Int J Comput Vis (2024). https://doi.org/10.1007/s11263-024-02012-y

