
A Survey on Active Deep Learning: From Model Driven to Data Driven

Published: 13 September 2022

Abstract

Which samples should be labeled in a large dataset is one of the most important problems in training deep learning models. A variety of active sample-selection strategies related to deep learning have been proposed in the literature. We define these as Active Deep Learning (ADL) only if the predictor or the selector is a deep model, where the basic learner is called the predictor and the labeling scheme is called the selector. In this survey, we categorize ADL into model-driven ADL and data-driven ADL according to whether the selector is model driven or data driven, and we introduce the distinct characteristics of each type. We summarize three fundamental factors in the design of a selector, and we point out that, with the development of deep learning, the selector in ADL is also evolving from model driven to data driven. The advantages and disadvantages of data-driven ADL versus model-driven ADL are thoroughly analyzed, and the sub-classes of each are summarized and discussed in detail. Finally, we survey the trend of ADL from model driven to data driven.
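To make the predictor/selector decomposition concrete, the following is a minimal sketch (an illustration, not taken from the survey) of a model-driven selector: given class-probability outputs from some predictor, it queries the pool samples with the highest predictive entropy, i.e., those the predictor is least certain about. The function name `entropy_selector` and the toy data are assumptions for illustration only.

```python
import numpy as np

def entropy_selector(probs, k):
    """Model-driven selector: return the indices of the k pool samples
    whose predicted class distributions have the highest entropy
    (i.e., the samples the predictor is most uncertain about)."""
    eps = 1e-12  # guard against log(0)
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    # Indices of the k largest entropies, most uncertain first.
    return np.argsort(entropy)[-k:][::-1]

# Toy pool of predictions from a hypothetical predictor (3 samples, 2 classes).
probs = np.array([[0.95, 0.05],   # confident  -> low entropy
                  [0.50, 0.50],   # uncertain  -> high entropy
                  [0.80, 0.20]])  # in between
print(entropy_selector(probs, 1))  # -> [1]
```

A data-driven selector, by contrast, would replace this fixed uncertainty heuristic with a model (e.g., a policy network) that learns the query strategy itself from data.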



Published in ACM Computing Surveys, Volume 54, Issue 10s, January 2022, 831 pages.
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3551649

      Publication rights licensed to ACM. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.

Publisher: Association for Computing Machinery, New York, NY, United States

      Publication History

      • Published: 13 September 2022
      • Online AM: 23 March 2022
      • Accepted: 3 January 2022
      • Revised: 27 November 2021
      • Received: 29 May 2021
      Published in csur Volume 54, Issue 10s


      Qualifiers

      • survey
      • Refereed
