EGU24-19546, updated on 11 Mar 2024
https://doi.org/10.5194/egusphere-egu24-19546
EGU General Assembly 2024
© Author(s) 2024. This work is distributed under
the Creative Commons Attribution 4.0 License.

Decoding Ocean Depths: Machine Learning Insights into Tidal Dynamics with UPFLOW’s OBS Data

Carlos Corela1, Alex Saoulis2,3, Maria Tsekhmistrenko3,4, Afonso Loureiro5,1, Miguel Miranda6,7, and Ana Ferreira3
  • 1University of Lisbon, Institute D. Luiz, Lisbon, Portugal (ccorela@fc.ul.pt)
  • 2Fathom, Bristol, United Kingdom
  • 3University College London, London, United Kingdom
  • 4ERP, Earth Rover Program, London, United Kingdom
  • 5ARDITI - Regional Agency for the Development of Research, Technology and Innovation, Funchal, Portugal
  • 6IPMA - Portuguese Institute of the Sea and Atmosphere, Lisbon, Portugal
  • 7AIR Centre, Atlantic International Research Centre, Terceira, Azores, Portugal

The iReverb project explores technical methodologies employed in the UPFLOW project, which addresses the challenges associated with tidally induced noise on Ocean Bottom Seismometers (OBS) deployed in the North Atlantic Ocean. Unlike sensors at land stations, OBS sensors are exposed to oceanic currents, whose coupling with the instruments induces reverberations in the recordings. This project showcases a method for exploring reverberations of tidally modulated, current-induced noise across different OBS types around the Azores, Madeira and Canaries region. The overarching objective is to present a comprehensive technical framework that enhances our understanding of Ocean Bottom Circulation (OBC) and provides insights for calibrating current models. A central aspect of this project is the proposed use of machine learning (ML) algorithms to automate the mapping of resonances and obtain a proxy for OBC.

Here, we present the iReverb methodology: a pixel-level annotated spectrogram dataset is manually curated, and an ML classifier is then trained to identify features in each spectrogram (supervised semantic segmentation). First, 15-minute spectrograms spanning 1-20 Hz are manually annotated using an open-source labelling tool. A deep Convolutional Neural Network (CNN), consisting of a ResNet encoder, a UNet decoder, and a pixel-wise classification head, is then trained on these annotations to classify the features in each spectrogram. We explore the factors, such as changes to the CNN architecture, that improve the model's performance in annotating key features. We also discuss insights from attempted alternatives to standard supervised learning (semi-supervised learning and synthetic labels).
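
As an illustration only (this is not the authors' code), the following Python sketch shows how such a pipeline could look: a 15-minute OBS trace is converted into a 1-20 Hz log-power spectrogram and fed to a ResNet-encoder/UNet-decoder network with a pixel-wise classification head. The segmentation_models_pytorch dependency, the 100 Hz sampling rate, the ResNet-34 backbone, the STFT parameters and the three-class label scheme are all illustrative assumptions, not details taken from the abstract.

import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram
import segmentation_models_pytorch as smp  # assumed third-party dependency

FS = 100.0      # assumed OBS sampling rate (Hz)
N_CLASSES = 3   # assumed label scheme, e.g. background / resonance / other noise

def trace_to_spectrogram(trace, fs=FS):
    """Log-power spectrogram of a 15-minute trace, restricted to the 1-20 Hz band."""
    f, _, Sxx = spectrogram(trace, fs=fs, nperseg=256, noverlap=192)
    band = (f >= 1.0) & (f <= 20.0)
    logS = 10.0 * np.log10(Sxx[band] + 1e-12)
    return (logS - logS.mean()) / (logS.std() + 1e-8)   # normalise for the CNN

def pad_to_multiple(a, m=32):
    """Zero-pad freq/time axes so the encoder's five downsampling stages divide evenly."""
    return np.pad(a, ((0, (-a.shape[0]) % m), (0, (-a.shape[1]) % m)))

# ResNet encoder + UNet decoder; the final 1x1 convolution inside smp.Unet
# acts as the pixel-wise classification head.
model = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                 in_channels=1, classes=N_CLASSES)
criterion = nn.CrossEntropyLoss()  # per-pixel, multi-class loss

# One forward/backward pass on a synthetic 15-minute window (900 s at 100 Hz);
# in training, the random labels below are replaced by the manual annotations.
trace = np.random.randn(int(15 * 60 * FS)).astype(np.float32)
x = torch.from_numpy(pad_to_multiple(trace_to_spectrogram(trace)))[None, None]
y = torch.randint(0, N_CLASSES, x.shape[-2:])[None]    # stand-in pixel labels
logits = model(x)                                      # (1, N_CLASSES, freq, time)
loss = criterion(logits, y)
loss.backward()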

Finally, we discuss the results and evaluate the effectiveness of the ML/DL approach in mapping resonances, demonstrating the potential of using these resonances as signals for tracking ocean currents (and whales).
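
The abstract does not state which evaluation metric is used; as one plausible choice for semantic-segmentation masks, the short helper below computes per-class intersection-over-union (IoU) between a predicted mask and a manually annotated mask. The function name and the three-class setting are hypothetical.

import numpy as np

def per_class_iou(pred, target, n_classes):
    """Per-class IoU between integer-labelled predicted and annotated masks."""
    ious = np.full(n_classes, np.nan)
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious[c] = inter / union
    return ious

# e.g. per_class_iou(logits.argmax(1)[0].numpy(), annotated_mask, n_classes=3),
# where logits come from the segmentation model sketched above.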

This project was funded by the UPFLOW project (ERC grant 101001601). This work was partially funded by FCT I.P./MCTES through PIDDAC (UIDB/50019/2020, UIDP/50019/2020 and LA/P/0068/2020).

How to cite: Corela, C., Saoulis, A., Tsekhmistrenko, M., Loureiro, A., Miranda, M., and Ferreira, A.: Decoding Ocean Depths: Machine Learning Insights into Tidal Dynamics with UPFLOW’s OBS Data, EGU General Assembly 2024, Vienna, Austria, 14–19 Apr 2024, EGU24-19546, https://doi.org/10.5194/egusphere-egu24-19546, 2024.