TOWARDS IMPROVING HARMONIC SENSITIVITY AND PREDICTION STABILITY FOR SINGING MELODY EXTRACTION

Keren Shao, Ke Chen, Taylor Berg-Kirkpatrick, Shlomo Dubnov

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

In deep learning research, many melody extraction models rely on redesigning neural network architectures to improve performance. In this paper, we propose an input feature modification and a training objective modification based on two assumptions. First, harmonics in the spectrograms of audio data decay rapidly along the frequency axis. To enhance the model's sensitivity to these trailing harmonics, we modify the Combined Frequency and Periodicity (CFP) representation using the discrete z-transform. Second, vocal and non-vocal segments of extremely short duration are uncommon. To ensure a more stable melody contour, we design a differentiable loss function that prevents the model from predicting such segments. We apply these modifications to several models, including MSNet, FTANet, and a newly introduced model, PianoNet, adapted from a piano transcription network. Our experimental results demonstrate that the proposed modifications are empirically effective for singing melody extraction.
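The CFP modification is only summarized in the abstract. One plausible reading, sketched below, is that a transform in the CFP cascade is evaluated as a discrete z-transform off the unit circle, which amounts to weighting spectral bin k by r^(-k); with r < 1 this lifts the weak, rapidly decaying trailing harmonics before the periodicity features are formed. The function name, radius r, and the log compression are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def ztransform_cepstrum(power_spec, r=0.999):
    """Periodicity (cepstrum-like) feature via a z-transform off the unit circle.

    A generalized cepstrum is an inverse DFT of a compressed power spectrum.
    Evaluating a z-transform at radius r instead is equivalent to multiplying
    bin k by r**(-k), so with r < 1 the trailing harmonics are amplified
    before being folded into the periodicity representation.
    (Illustrative sketch; the paper's exact formulation may differ.)
    """
    k = np.arange(len(power_spec), dtype=float)
    compressed = np.log1p(power_spec)        # nonlinear compression (assumed)
    weighted = compressed * r ** (-k)        # off-unit-circle evaluation
    return np.real(np.fft.ifft(weighted))    # cepstrum-like periodicity axis

# Toy spectrum whose partials decay rapidly along the frequency axis.
n_bins = 2048
spec = np.zeros(n_bins)
for h in range(1, 9):
    spec[40 * h] = 0.4 ** h                  # decaying partials of a toy f0

plain = ztransform_cepstrum(spec, r=1.0)     # reduces to the usual cepstrum
boosted = ztransform_cepstrum(spec, r=0.999) # trailing harmonics lifted
```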
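The second modification, a differentiable loss that discourages extremely short vocal and non-vocal segments, is likewise only named in the abstract. A minimal sketch of one way such a penalty could look, assuming frame-wise voicing probabilities and a moving-average consistency term (the window length, weighting, and function name are hypothetical):

```python
import torch
import torch.nn.functional as F

def short_segment_penalty(voicing_prob, min_frames=9):
    """Penalize voicing predictions that flip within very short windows.

    voicing_prob: (batch, time) tensor of frame-wise voicing probabilities.
    Each frame is compared with a moving average over `min_frames`; isolated
    spikes or dips (segments much shorter than the window) disagree with
    their local average and are pushed towards it.  The term is fully
    differentiable and can be added to the usual melody-extraction loss.
    (Illustrative smoothness term, not necessarily the paper's formulation.)
    """
    x = voicing_prob.unsqueeze(1)                        # (batch, 1, time)
    smoothed = F.avg_pool1d(x, kernel_size=min_frames,
                            stride=1, padding=min_frames // 2)
    smoothed = smoothed[..., : x.shape[-1]]              # trim if padding overshoots
    return ((x - smoothed) ** 2).mean()

# Usage sketch: add the penalty to an existing training objective.
probs = torch.sigmoid(torch.randn(4, 128, requires_grad=True))
loss = short_segment_penalty(probs, min_frames=9)
loss.backward()
```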

Original language: English
Title of host publication: Proceedings of the International Society for Music Information Retrieval Conference
Publisher: International Society for Music Information Retrieval
Pages: 657-663
Number of pages: 7
DOIs
State: Published - 1 Jan 2023
Externally published: Yes

Publication series

Name: Proceedings of the International Society for Music Information Retrieval Conference
Volume: 2023
ISSN (Electronic): 3006-3094

ASJC Scopus subject areas

  • Music
  • Artificial Intelligence
  • Human-Computer Interaction
  • Signal Processing
