TY - CHAP
T1 - Towards Improving Harmonic Sensitivity and Prediction Stability for Singing Melody Extraction
AU - Shao, Keren
AU - Chen, Ke
AU - Berg-Kirkpatrick, Taylor
AU - Dubnov, Shlomo
N1 - Publisher Copyright:
© K. Shao, K. Chen, T. Berg-Kirkpatrick, S. Dubnov.
PY - 2023/1/1
Y1 - 2023/1/1
AB - In deep learning research, many melody extraction models rely on redesigning neural network architectures to improve performance. In this paper, we propose an input feature modification and a training objective modification based on two assumptions. First, harmonics in the spectrograms of audio data decay rapidly along the frequency axis. To enhance the model’s sensitivity to the trailing harmonics, we modify the Combined Frequency and Periodicity (CFP) representation using the discrete z-transform. Second, vocal and non-vocal segments with extremely short durations are uncommon. To ensure a more stable melody contour, we design a differentiable loss function that prevents the model from predicting such segments. We apply these modifications to several models, including MSNet, FTANet, and a newly introduced model, PianoNet, modified from a piano transcription network. Our experimental results demonstrate that the proposed modifications are empirically effective for singing melody extraction.
UR - http://www.scopus.com/inward/record.url?scp=85219247614&partnerID=8YFLogxK
U2 - 10.5281/zenodo.10265373
DO - 10.5281/zenodo.10265373
M3 - Chapter
AN - SCOPUS:85219247614
T3 - Proceedings of the International Society for Music Information Retrieval Conference
SP - 657
EP - 663
BT - Proceedings of the International Society for Music Information Retrieval Conference
PB - International Society for Music Information Retrieval
ER -