DiffSED: Sound Event Detection with Denoising Diffusion

Authors

  • Swapnil Bhosale, University of Surrey
  • Sauradip Nag, University of Surrey
  • Diptesh Kanojia, University of Surrey
  • Jiankang Deng, Imperial College London
  • Xiatian Zhu, University of Surrey

DOI:

https://doi.org/10.1609/aaai.v38i2.27837

Keywords:

CV: Scene Analysis & Understanding, ML: Deep Generative Models & Autoencoders, ML: Multimodal Learning

Abstract

Sound Event Detection (SED) aims to predict the temporal boundaries of all the events of interest and their class labels, given an unconstrained audio sample. Taking either the split-and-classify (i.e., frame-level) strategy or the more principled event-level modeling approach, all existing methods consider the SED problem from the discriminative learning perspective. In this work, we reformulate the SED problem by taking a generative learning perspective. Specifically, we aim to generate sound temporal boundaries from noisy proposals in a denoising diffusion process, conditioned on a target audio sample. During training, our model learns to reverse the noising process by converting noisy latent queries to the ground-truth versions in the elegant Transformer decoder framework. Doing so enables the model to generate accurate event boundaries from even noisy queries during inference. Extensive experiments on the Urban-SED and EPIC-Sounds datasets demonstrate that our model significantly outperforms existing alternatives, with 40+% faster convergence in training. Code: https://github.com/Surrey-UPLab/DiffSED
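To make the generative formulation concrete, the sketch below illustrates the general idea described in the abstract: ground-truth event boundaries are corrupted with Gaussian noise (forward diffusion), and a Transformer decoder, conditioned on audio features, learns to map the noisy boundary queries back to clean (start, end) pairs plus class labels. This is only a minimal illustration, not the authors' released implementation; all module names, dimensions, the noise schedule, and the number of queries are assumptions.

```python
# Minimal sketch of boundary denoising in the spirit of DiffSED.
# Not the authors' code; hyperparameters and module names are illustrative.
import torch
import torch.nn as nn


class BoundaryDenoiser(nn.Module):
    """Refines noisy (start, end) event proposals conditioned on audio features."""

    def __init__(self, d_model=256, n_heads=8, n_layers=4, n_classes=10):
        super().__init__()
        self.proposal_proj = nn.Linear(2, d_model)  # embed noisy (start, end) pairs as queries
        decoder_layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(decoder_layer, n_layers)
        self.boundary_head = nn.Linear(d_model, 2)       # denoised (start, end) in [0, 1]
        self.class_head = nn.Linear(d_model, n_classes)  # event class logits

    def forward(self, noisy_boundaries, audio_feats):
        # noisy_boundaries: (B, N, 2) proposals; audio_feats: (B, T, d_model) encoder output
        queries = self.proposal_proj(noisy_boundaries)
        decoded = self.decoder(queries, audio_feats)  # cross-attend to the audio condition
        return self.boundary_head(decoded).sigmoid(), self.class_head(decoded)


def add_noise(boundaries, t, alphas_cumprod):
    """Forward diffusion: corrupt ground-truth boundaries at timestep t."""
    noise = torch.randn_like(boundaries)
    a = alphas_cumprod[t].view(-1, 1, 1)
    return a.sqrt() * boundaries + (1 - a).sqrt() * noise


# Toy usage: denoise random proposals for a batch of two audio clips.
model = BoundaryDenoiser()
audio_feats = torch.randn(2, 100, 256)            # e.g. 100 frames of audio encoder features
gt = torch.rand(2, 30, 2)                          # 30 padded ground-truth events per clip
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)  # assumed noise schedule
t = torch.randint(0, 1000, (2,))
noisy = add_noise(gt, t, alphas_cumprod)
pred_boundaries, pred_logits = model(noisy, audio_feats)
```

At inference, the same decoder can be applied iteratively, starting from purely random proposals and progressively refining them into event boundaries, which is the behavior the abstract attributes to the trained model.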

Published

2024-03-24

How to Cite

Bhosale, S., Nag, S., Kanojia, D., Deng, J., & Zhu, X. (2024). DiffSED: Sound Event Detection with Denoising Diffusion. Proceedings of the AAAI Conference on Artificial Intelligence, 38(2), 792-800. https://doi.org/10.1609/aaai.v38i2.27837

Issue

Vol. 38 No. 2 (2024)
Section

AAAI Technical Track on Computer Vision I