
Abstract


In seismic applications, labelling is a challenging and tedious task: the seismic data cover broad areas, and labelling them requires expert knowledge. Consequently, finding ways to limit the labelling effort is a priority for accelerating workflows and optimizing human resources. Active learning can help reach these goals. It consists of selecting the most informative data to label in order to improve model performance, following an iterative approach in which, at each step, unlabelled data are chosen, labelled, and used to train the model. This process is repeated until the model reaches acceptable performance. The main challenge when incrementally training a neural network is the forgetting of patterns learned during previous training iterations. We show that the choice of the old/new label ratio in the training and validation sets, as well as the choice of the learning rate and the patience (the early-stopping criterion), can help mitigate knowledge loss during incremental training.
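The iterative loop described above can be sketched in a few lines. The snippet below is a minimal illustration only: the 1-D threshold "model", the labelling oracle, the uncertainty measure, and the old/new mixing arithmetic are all assumptions made for the sake of the example, not the network, data, or hyper-parameters used in this work (the learning rate and patience, which the abstract also identifies as levers, are omitted from the toy).

```python
def oracle(x):
    """Hypothetical labelling oracle: ground-truth class boundary at x = 0.6."""
    return int(x > 0.6)

def train(samples):
    # Toy 1-D "model": a threshold halfway between the highest negative
    # and the lowest positive training sample.
    neg = [x for x, y in samples if y == 0]
    pos = [x for x, y in samples if y == 1]
    return (max(neg) + min(pos)) / 2

def active_learning(pool, seed_labels, rounds=5, k=6, old_ratio=0.5):
    labelled = list(seed_labels)
    pool = [x for x in pool if x not in {s for s, _ in labelled}]
    thr = train(labelled)
    for _ in range(rounds):
        # Uncertainty sampling: pick the k pool samples closest to the
        # current decision boundary.
        batch = sorted(pool, key=lambda x: abs(x - thr))[:k]
        new = [(x, oracle(x)) for x in batch]
        pool = [x for x in pool if x not in batch]
        # Rehearsal: retrain on a mix of old and new labels; old_ratio
        # sets the old/new share. The seed labels are always replayed
        # so that both classes stay represented in the training set.
        n_old = int(len(new) * old_ratio / (1 - old_ratio))
        thr = train(seed_labels + labelled[-n_old:] + new)
        labelled += new
    return thr

pool = [i / 100 for i in range(100)]
thr = active_learning(pool, seed_labels=[(0.0, 0), (0.99, 1)])
# After a few rounds, thr sits near the true boundary at 0.6.
```

The rehearsal step is the point of interest: by fixing the share of previously labelled data mixed into each incremental training set, the model keeps seeing old patterns while it absorbs the newly labelled batch, which is the mechanism the abstract proposes for mitigating catastrophic forgetting.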

/content/papers/10.3997/2214-4609.202332023
2023-03-20
2024-04-28