Abstract
Most existing deep-learning-based binaural speaker separation systems focus on producing a monaural estimate for each of the target speakers, and thus do not preserve the interaural cues that are crucial for human listeners to perform sound localization and lateralization. In this study, we address talker-independent binaural speaker separation with interaural cues preserved in the estimated binaural signals. Specifically, we extend a newly developed gated recurrent neural network for monaural separation by additionally incorporating self-attention mechanisms and dense connectivity. We develop an end-to-end multiple-input multiple-output system that directly maps the binaural waveforms of the mixture to those of the individual speech signals. The experimental results show that our proposed approach achieves significantly better separation performance than a recent binaural separation approach. In addition, our approach effectively preserves the interaural cues, which improves the accuracy of sound localization.
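
To make the described architecture more concrete, below is a minimal PyTorch sketch of a multiple-input multiple-output (MIMO) time-domain separator of the kind outlined in the abstract: a learned encoder/decoder pair wrapped around a recurrent separator with self-attention and dense connections across blocks. This is an illustrative skeleton under stated assumptions, not the authors' exact model: the class names (`BinauralSeparator`, `SeparatorBlock`), layer sizes, block count, and the use of a standard bidirectional GRU in place of the paper's gated recurrent network are placeholders chosen for brevity.

```python
# Hedged sketch only: layer sizes, names, and the block layout are assumptions,
# not the architecture from the paper.
import torch
import torch.nn as nn


class SeparatorBlock(nn.Module):
    """One separator block: a bidirectional GRU followed by self-attention.

    Each block emits a fixed number of new feature channels ("growth"); the
    caller concatenates them with the block's input (dense connectivity).
    """

    def __init__(self, in_dim, hidden_dim, growth, num_heads=4):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden_dim, batch_first=True,
                          bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                          batch_first=True)
        self.norm = nn.LayerNorm(2 * hidden_dim)
        self.out = nn.Linear(2 * hidden_dim, growth)

    def forward(self, x):                       # x: (batch, frames, in_dim)
        h, _ = self.gru(x)                      # (batch, frames, 2*hidden)
        a, _ = self.attn(h, h, h)               # self-attention over frames
        return self.out(self.norm(h + a))       # (batch, frames, growth)


class BinauralSeparator(nn.Module):
    """End-to-end MIMO separator: binaural mixture in, binaural estimates out."""

    def __init__(self, num_speakers=2, enc_dim=128, kernel=16, stride=8,
                 hidden_dim=128, growth=128, num_blocks=4):
        super().__init__()
        self.num_speakers = num_speakers
        self.enc_dim = enc_dim
        # Shared learned encoder/decoder applied to each ear's waveform.
        self.encoder = nn.Conv1d(1, enc_dim, kernel, stride=stride)
        self.decoder = nn.ConvTranspose1d(enc_dim, 1, kernel, stride=stride)
        # The separator sees the concatenated left/right encodings; each block's
        # input grows as earlier block outputs are densely concatenated.
        feat_dim = 2 * enc_dim
        self.blocks = nn.ModuleList(
            SeparatorBlock(feat_dim + i * growth, hidden_dim, growth)
            for i in range(num_blocks))
        # One sigmoid mask per speaker and per ear in the encoder domain.
        self.mask = nn.Linear(feat_dim + num_blocks * growth,
                              num_speakers * 2 * enc_dim)

    def forward(self, mix):                     # mix: (batch, 2, samples)
        left = self.encoder(mix[:, 0:1])        # (batch, enc_dim, frames)
        right = self.encoder(mix[:, 1:2])
        feats = torch.cat([left, right], dim=1).transpose(1, 2)
        dense = feats                           # (batch, frames, 2*enc_dim)
        for block in self.blocks:               # dense connectivity
            dense = torch.cat([dense, block(dense)], dim=-1)
        masks = torch.sigmoid(self.mask(dense))
        masks = masks.view(mix.size(0), -1, self.num_speakers, 2, self.enc_dim)
        masks = masks.permute(0, 2, 3, 4, 1)    # (batch, spk, ear, enc_dim, frames)
        outs = []
        for s in range(self.num_speakers):      # decode both ears per speaker
            l = self.decoder(masks[:, s, 0] * left).squeeze(1)
            r = self.decoder(masks[:, s, 1] * right).squeeze(1)
            outs.append(torch.stack([l, r], dim=1))
        return torch.stack(outs, dim=1)         # (batch, spk, 2, samples)


if __name__ == "__main__":
    model = BinauralSeparator()
    mixture = torch.randn(1, 2, 16000)          # 1 s of 16 kHz binaural audio
    estimates = model(mixture)
    print(estimates.shape)                      # -> (1, 2, 2, 16000)
```

In this sketch the two ears share one encoder, and a separate mask is estimated per speaker for each ear from the same separator state, so the left and right estimates are produced jointly rather than independently; keeping the channels coupled in this way is one simple means of letting interaural information flow through to the binaural outputs.
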
| Original language | English |
| --- | --- |
| Article number | 9292089 |
| Pages (from-to) | 26-30 |
| Number of pages | 5 |
| Journal | IEEE Signal Processing Letters |
| Volume | 28 |
| DOIs | |
| State | Published - 1 Jan 2021 |
| Externally published | Yes |
Keywords
- Binaural speaker separation
- interaural cue preservation
- self-attention
- time-domain
ASJC Scopus subject areas
- Signal Processing
- Electrical and Electronic Engineering
- Applied Mathematics