ISCA Archive Interspeech 2022

Text-Driven Separation of Arbitrary Sounds

Kevin Kilgour, Beat Gfeller, Qingqing Huang, Aren Jansen, Scott Wisdom, Marco Tagliasacchi

We propose a method of separating a desired sound source from a single-channel mixture, based on either a textual description or a short audio sample of the target source. This is achieved by combining two distinct models. The first model, SoundWords, is trained to jointly embed an audio clip and its textual description into a shared representation. The second model, SoundFilter, takes a mixed-source audio clip as input and separates it based on a conditioning vector from the shared text-audio representation defined by SoundWords, making the model agnostic to the conditioning modality. Evaluating on multiple datasets, we show that our approach can achieve an SI-SDR of 9.1 dB for mixtures of two arbitrary sounds when conditioned on text and 10.1 dB when conditioned on audio. We also show that SoundWords is effective at learning co-embeddings and that our multi-modal training approach improves the performance of SoundFilter.
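To make the two-model design concrete, the sketch below illustrates the structure the abstract describes: a shared text-audio embedder (SoundWords) producing a conditioning vector that drives a separation network (SoundFilter), so the separator is indifferent to whether the condition came from text or audio. This is a minimal hypothetical sketch, not the authors' implementation; the encoder architectures, embedding dimension, and the FiLM-style conditioning mechanism are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 512  # shared embedding size (assumed; not stated in the abstract)

class SoundWords(nn.Module):
    """Embeds an audio clip and its text description into one shared space,
    trained so that matching (audio, text) pairs land close together."""
    def __init__(self, n_audio_feats=128, n_text_feats=300):
        super().__init__()
        # Placeholder linear encoders; the real model would use deep networks.
        self.audio_encoder = nn.Linear(n_audio_feats, EMB_DIM)
        self.text_encoder = nn.Linear(n_text_feats, EMB_DIM)

    def embed_audio(self, audio_feats):  # (batch, n_audio_feats)
        return F.normalize(self.audio_encoder(audio_feats), dim=-1)

    def embed_text(self, text_feats):    # (batch, n_text_feats)
        return F.normalize(self.text_encoder(text_feats), dim=-1)

class SoundFilter(nn.Module):
    """Separates the target source from a mixture, conditioned on a vector
    from the shared space; the vector may come from either modality."""
    def __init__(self, n_samples=16000):
        super().__init__()
        # FiLM-style scale/shift conditioning is an assumption for illustration.
        self.film = nn.Linear(EMB_DIM, 2 * n_samples)
        self.net = nn.Linear(n_samples, n_samples)  # placeholder separator

    def forward(self, mixture, cond):    # mixture: (batch, n_samples)
        scale, shift = self.film(cond).chunk(2, dim=-1)
        return self.net(mixture * scale + shift)

# Usage: the same SoundFilter accepts text- or audio-derived conditioning.
words, filt = SoundWords(), SoundFilter()
mixture = torch.randn(1, 16000)
cond = words.embed_text(torch.randn(1, 300))   # or words.embed_audio(...)
estimate = filt(mixture, cond)
```

Because both encoders map into the same normalized space, the separator never needs to know which modality produced its conditioning vector, which is what allows training and inference to mix text and audio conditioning freely.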


doi: 10.21437/Interspeech.2022-11052

Cite as: Kilgour, K., Gfeller, B., Huang, Q., Jansen, A., Wisdom, S., Tagliasacchi, M. (2022) Text-Driven Separation of Arbitrary Sounds. Proc. Interspeech 2022, 5403-5407, doi: 10.21437/Interspeech.2022-11052

@inproceedings{kilgour22_interspeech,
  author={Kevin Kilgour and Beat Gfeller and Qingqing Huang and Aren Jansen and Scott Wisdom and Marco Tagliasacchi},
  title={{Text-Driven Separation of Arbitrary Sounds}},
  year={2022},
  booktitle={Proc. Interspeech 2022},
  pages={5403--5407},
  doi={10.21437/Interspeech.2022-11052}
}