Abstract
Reproducing accurate and perceptually realistic spatial audio for augmented and virtual reality (AR/VR) requires headphones with a flat frequency response. This can be achieved by equalizing the headphone transducers' output given the transfer function between the transducer and the human ear, referred to as the Ear Acoustic Response (EAR). The EAR is unique to each individual and is a function of the transducer characteristics, the user's anthropometric features (e.g., ear and head shape), and the interactions between the two. This paper proposes a novel method to infer the EAR of any listener from their ear features, using a probabilistic framework with a sub-sample of the population as a prior. We also introduce an approach to assess the level of personalization achieved, and benchmark the improvement delivered by the proposed algorithm relative to a generic solution.
| Original language | English |
|---|---|
| Journal | Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing |
| DOIs | |
| State | Published - 1 Jan 2023 |
| Externally published | Yes |
| Event | 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023, Rhodes Island, Greece, 4 Jun 2023 → 10 Jun 2023 |
Keywords
- AR/VR
- EAR
- Gaussian Processes
- HRTF
- HpTF
- Personalized Recommendation
- Spatial Audio
ASJC Scopus subject areas
- Software
- Signal Processing
- Electrical and Electronic Engineering