Localization of virtual sounds in dynamic listening using sparse HRTFs

Zamir Ben-Hur, David Lou Alon, Philip W. Robinson, Ravish Mehra

Research output: Contribution to conference › Paper › peer-review

13 Scopus citations

Abstract

Reproducing virtual sound sources that are perceptually indistinguishable from real-world sounds requires an accurate representation of the virtual source location. A key component of such a reproduction system is the Head-Related Transfer Function (HRTF), which differs from one individual to another. In this study, we introduce an experimental setup for accurately evaluating localization performance with a spatial sound reproduction system under dynamic listening conditions. The setup makes it possible to compare the evaluation results with real-world localization performance, and facilitates testing of different virtual reproduction conditions, such as different HRTFs or different HRTF representations and interpolation methods. Localization experiments are conducted comparing real-world sound sources with virtual sound sources rendered using high-resolution individual HRTFs, sparse individual HRTFs, and a generic HRTF.
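The abstract mentions rendering from sparse HRTF sets via different representations and interpolation methods, but gives no implementation details. As a purely illustrative sketch (not the paper's method), the snippet below shows one common approach: fitting a truncated spherical-harmonic expansion to sparse HRTF measurements by least squares and re-evaluating it at arbitrary directions. All function names, array shapes, and the order-4 truncation are assumptions for the example.

```python
import numpy as np
from scipy.special import sph_harm

def sh_matrix(order, colat, azim):
    """Real-valued spherical-harmonic basis at the given directions
    (colat: colatitude, azim: azimuth, both in radians).
    Returns a (num_dirs, (order + 1)**2) matrix."""
    cols = []
    for n in range(order + 1):
        for m in range(-n, n + 1):
            # SciPy convention: sph_harm(m, n, azimuth, colatitude)
            y = sph_harm(abs(m), n, azim, colat)
            if m < 0:
                cols.append(np.sqrt(2) * y.imag)
            elif m == 0:
                cols.append(y.real)
            else:
                cols.append(np.sqrt(2) * y.real)
    return np.stack(cols, axis=-1)

def interpolate_hrtf(h_sparse, colat_meas, azim_meas,
                     colat_out, azim_out, order=4):
    """Least-squares SH fit of sparse HRTF measurements, then
    re-evaluation at arbitrary output directions.
    h_sparse: (num_meas, num_freq_bins) complex HRTF spectra.
    order is illustrative; in practice it is limited by the
    number and layout of the measurement directions."""
    Y_meas = sh_matrix(order, colat_meas, azim_meas)   # (num_meas, num_coeffs)
    coeffs, *_ = np.linalg.lstsq(Y_meas, h_sparse, rcond=None)
    Y_out = sh_matrix(order, colat_out, azim_out)      # (num_out, num_coeffs)
    return Y_out @ coeffs                              # (num_out, num_freq_bins)
```

Because interpolation here is a linear fit-then-evaluate operation, any consistent real SH sign convention yields the same interpolated HRTFs; the truncation order controls the trade-off between spatial smoothing and fidelity to the sparse measurements.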

Original language: English
State: Published - 1 Jan 2020
Externally published: Yes
Event: 2020 AES International Conference on Audio for Virtual and Augmented Reality, AVAR 2020 - Virtual, Online
Duration: 17 Aug 2020 - 19 Aug 2020

Conference

Conference: 2020 AES International Conference on Audio for Virtual and Augmented Reality, AVAR 2020
City: Virtual, Online
Period: 17/08/20 - 19/08/20

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Acoustics and Ultrasonics
