Comparison and Analysis of Deep Audio Embeddings for Music Emotion Recognition

Eunjeong Koh, Shlomo Dubnov

Research output: Contribution to journal › Conference article › peer-review

7 Scopus citations

Abstract

Emotion is a complex notion in music that is difficult to capture even with fine-tuned feature engineering. In this paper, we investigate the utility of state-of-the-art pre-trained deep audio embedding methods for the Music Emotion Recognition (MER) task. Deep audio embedding methods allow us to efficiently compress high-dimensional audio features into a compact representation. We implement several multi-class classifiers on top of deep audio embeddings to predict emotion semantics in music, and evaluate the effectiveness of the L3-Net and VGGish embedding methods for music emotion inference over four music datasets. Experiments with several classifiers show that deep audio embedding solutions can improve on the performance of previous baseline MER models. We conclude that deep audio embeddings represent musical emotion semantics for the MER task without expert human engineering.
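The pipeline described in the abstract — pre-trained deep audio embeddings (e.g. L3-Net or VGGish) followed by a multi-class emotion classifier — can be sketched as below. This is a minimal illustration, not the authors' implementation: the embeddings are synthetic stand-ins (512-dimensional, matching a common L3-Net output size), the four emotion classes are hypothetical labels, and logistic regression stands in for whichever classifiers the paper evaluates. In practice, real embeddings would come from a library such as `openl3` or the VGGish model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for pre-computed deep audio embeddings:
# one 512-d vector per clip, grouped into four assumed emotion classes.
rng = np.random.default_rng(0)
n_per_class, dim, n_classes = 100, 512, 4  # e.g. happy / sad / angry / relaxed
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Multi-class classifier trained on the (frozen) embeddings.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

The key design point the paper argues for is that the embedding network stays frozen: only the lightweight classifier on top is trained, so no hand-crafted audio features or end-to-end fine-tuning are needed.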

Original language: English
Pages (from-to): 15-22
Number of pages: 8
Journal: CEUR Workshop Proceedings
Volume: 2897
State: Published - 1 Jan 2021
Externally published: Yes
Event: 4th Workshop on Affective Content Analysis, AffCon 2021 - Virtual, Online
Duration: 9 Feb 2021 → …

ASJC Scopus subject areas

  • General Computer Science
