TY - GEN
T1 - Spatial Upsampling of Sparse Head Related Transfer Functions - A VQ-VAE & Transformer based approach
AU - Zurale, Devansh
AU - Dubnov, Shlomo
N1 - Publisher Copyright:
© 2023 International Conference on Spatial and Immersive Audio. All Rights Reserved.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - With the increasing demand for AR/VR technologies, accurately reproducing binaural spatial audio by obtaining individualized Head Related Transfer Functions (HRTFs) has become a high-priority research subject. Meanwhile, recent developments in Generative AI have achieved substantial success in several domains, including audio, language, and images. In this work, we propose a framework that uses a 3D Convolutional Neural Network (CNN) based Vector-Quantized Variational AutoEncoder (VQ-VAE) to first learn a regularized latent representation of the HRTFs, leveraging both spatial and spectral correlations between neighboring magnitude HRTFs. We then use the Transformer architecture to find mappings between latent sequences derived from spatially sparse HRTF measurements and the latent sequences defining HRTFs of high spatial resolution. We thereby predict HRTFs at 1440 locations given sparse HRTF measurements from 25 locations, while also allowing freedom over the sampling locations of the sparse HRTFs. We achieve a mean Log Spectral Distortion (LSD) error of 4.5 dB, and also demonstrate a contrived but informative case that obtains a mean LSD of 3 dB when evaluated over 10 validation subjects.
UR - http://www.scopus.com/inward/record.url?scp=85173010973&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85173010973
T3 - International Conference on Spatial and Immersive Audio 2023
SP - 149
EP - 159
BT - International Conference on Spatial and Immersive Audio 2023
PB - Audio Engineering Society
T2 - International Conference on Spatial and Immersive Audio 2023
Y2 - 23 August 2023 through 25 August 2023
ER -