Effective model representation by information bottleneck principle

Ron M. Hecht, Elad Noor, Gil Dobry, Yaniv Zigel, Aharon Bar-Hillel, Naftali Tishby

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

The common approaches to feature extraction in speech processing are generative and parametric, yet they are highly sensitive to violations of their model assumptions. Here, we advocate the non-parametric Information Bottleneck (IB) method. IB is an information-theoretic approach that extends the notion of minimal sufficient statistics. However, unlike minimal sufficient statistics, which do not allow any loss of relevant data, the IB method enables a principled tradeoff between compactness and the amount of target-related information. IB's ability to improve a broad range of recognition tasks is illustrated on model dimension reduction for speaker recognition and on model clustering for age-group verification.
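The tradeoff the abstract describes is usually written as minimizing I(X;T) - β·I(T;Y), and the classic self-consistent update equations of Tishby, Pereira and Bialek can be sketched in a few lines. The following is a minimal illustration for a discrete joint distribution p(x, y); the function name, parameters, and the toy distribution are illustrative, not the paper's actual implementation.

```python
import numpy as np

def iterative_ib(pxy, n_clusters, beta, n_iter=200, seed=0):
    """Minimal sketch of the iterative Information Bottleneck algorithm:
    given a discrete joint distribution p(x, y), find a stochastic encoder
    p(t|x) that trades compression I(X;T) against relevance I(T;Y),
    with the tradeoff controlled by the multiplier beta."""
    rng = np.random.default_rng(seed)
    nx, ny = pxy.shape
    px = pxy.sum(axis=1)                      # marginal p(x)
    py_x = pxy / px[:, None]                  # conditional p(y|x)

    # random soft assignment p(t|x), columns normalized over t
    pt_x = rng.random((n_clusters, nx))
    pt_x /= pt_x.sum(axis=0)

    eps = 1e-12
    for _ in range(n_iter):
        pt = pt_x @ px                        # cluster marginal p(t)
        # p(y|t) = sum_x p(y|x) p(x|t), where p(x|t) = p(t|x) p(x) / p(t)
        py_t = (pt_x * px[None, :]) @ py_x / (pt[:, None] + eps)
        # KL divergence D[p(y|x) || p(y|t)] for every (t, x) pair
        kl = np.array([[np.sum(py_x[x] * np.log(py_x[x] / (py_t[t] + eps) + eps))
                        for x in range(nx)] for t in range(n_clusters)])
        # self-consistent update: p(t|x) proportional to p(t) exp(-beta * KL)
        pt_x = pt[:, None] * np.exp(-beta * kl)
        pt_x /= pt_x.sum(axis=0)
    return pt_x

# Toy joint distribution: x in {0..3}, y in {0,1}; the first two x's
# lean toward y = 0 and the last two toward y = 1.
pxy = np.array([[0.20, 0.05],
                [0.20, 0.05],
                [0.05, 0.20],
                [0.05, 0.20]])
encoder = iterative_ib(pxy, n_clusters=2, beta=5.0)
```

Raising `beta` pushes the encoder toward preserving target-related information (hard, y-aligned clusters); lowering it favors compression, eventually collapsing all of X into one cluster.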

Original language: English
Article number: 6480793
Pages (from-to): 1755-1759
Number of pages: 5
Journal: IEEE Transactions on Audio, Speech and Language Processing
Volume: 21
Issue number: 8
DOIs
State: Published - 22 May 2013

Keywords

  • Information bottleneck method
  • information theory
  • speaker recognition
  • speech recognition

