Audio Source Separation using Sparse Representations

Andrew Nesbit, Maria G. Jafari, Emmanuel Vincent, Mark D. Plumbley
Copyright: © 2011 | Pages: 20
ISBN13: 9781615209194 | ISBN10: 1615209190 | Softcover ISBN13: 9781616923693 | EISBN13: 9781615209200
DOI: 10.4018/978-1-61520-919-4.ch010
Cite Chapter

MLA

Nesbit, Andrew, et al. "Audio Source Separation using Sparse Representations." Machine Audition: Principles, Algorithms and Systems, edited by Wenwu Wang, IGI Global, 2011, pp. 246-265. https://doi.org/10.4018/978-1-61520-919-4.ch010.

APA

Nesbit, A., Jafari, M. G., Vincent, E., & Plumbley, M. D. (2011). Audio Source Separation using Sparse Representations. In W. Wang (Ed.), Machine Audition: Principles, Algorithms and Systems (pp. 246-265). IGI Global. https://doi.org/10.4018/978-1-61520-919-4.ch010

Chicago

Nesbit, Andrew, et al. "Audio Source Separation using Sparse Representations." In Machine Audition: Principles, Algorithms and Systems, edited by Wenwu Wang, 246-265. Hershey, PA: IGI Global, 2011. https://doi.org/10.4018/978-1-61520-919-4.ch010.


Abstract

The authors address the problem of audio source separation: the recovery of audio signals from recordings of mixtures of those signals. The sparse component analysis framework is a powerful method for achieving this. Sparse orthogonal transforms, in which only a few transform coefficients differ significantly from zero, are developed; once the signal has been transformed, energy is apportioned from each transform coefficient to each estimated source, and, finally, the signal is reconstructed using the inverse transform. The overriding aim of this chapter is to demonstrate how this framework, exemplified here by two different decomposition methods that adapt to the signal in order to represent it sparsely, can be used to solve different problems in different mixing scenarios. To address the instantaneous (neither delays nor echoes) and underdetermined (more sources than mixtures) mixing model, a lapped orthogonal transform is adapted to the signal by selecting a basis from a library of predetermined bases. This method is closely related to the windowing methods used in the MPEG audio coding framework. For the anechoic (delays but no echoes) and determined (equal numbers of sources and mixtures) mixing case, a greedy adaptive transform is used, based on orthogonal basis functions that are learned from the observed data rather than selected from a predetermined library of bases. This is found to encode the signal characteristics by introducing a feedback system between the bases and the observed data. Experiments on mixtures of speech and music signals demonstrate that these methods give good signal approximations and separation performance, and they indicate promising directions for future research.
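The transform-apportion-invert pipeline described in the abstract can be illustrated with a minimal sketch. This is not the chapter's own algorithm: it assumes the mixing matrix is known, uses a fixed orthogonal DCT rather than an adapted or learned transform, and apportions energy with the simplest possible binary rule (each coefficient assigned entirely to the source whose mixing direction best matches it).

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 1024                 # signal length
n_src, n_mix = 3, 2      # underdetermined: more sources than mixtures

# Synthetic sources that are sparse in the DCT domain
# (a few nonzero coefficients each; mostly disjoint supports).
S_coef = np.zeros((n_src, n))
for i in range(n_src):
    idx = rng.choice(n, size=20, replace=False)
    S_coef[i, idx] = rng.standard_normal(20)
S = idct(S_coef, norm="ortho")           # time-domain sources

# Instantaneous mixing: x(t) = A s(t), with unit-norm mixing directions.
A = rng.standard_normal((n_mix, n_src))
A /= np.linalg.norm(A, axis=0)
X = A @ S

# 1. Sparse orthogonal transform of the mixtures.
X_coef = dct(X, norm="ortho")

# 2. Apportion each coefficient to the source whose mixing
#    direction has the largest projection onto it (binary masking).
proj = np.abs(A.T @ X_coef)              # (n_src, n) projection magnitudes
winner = np.argmax(proj, axis=0)
S_hat_coef = np.zeros((n_src, n))
for j in range(n_src):
    mask = winner == j
    # Amplitude along direction a_j for the coefficients it wins.
    S_hat_coef[j, mask] = A[:, j] @ X_coef[:, mask]

# 3. Reconstruct the source estimates with the inverse transform.
S_hat = idct(S_hat_coef, norm="ortho")
```

Because the sources have nearly disjoint sparse supports, the binary assignment recovers each source almost exactly wherever only one source is active; the chapter's methods improve on this sketch by adapting the transform itself so that real audio signals become similarly sparse and disjoint.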
