TY - GEN
T1 - Variational- and Metric-based Deep Latent Space for Out-of-Distribution Detection
AU - Dinari, Or
AU - Freifeld, Oren
N1 - Publisher Copyright:
© 2022 Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022. All rights reserved.
PY - 2022/1/1
Y1 - 2022/1/1
AB - One popular deep-learning approach to Out-Of-Distribution (OOD) detection is based on thresholding the values of per-class Gaussian likelihoods of deep features. However, two issues arise with that approach: first, the distributions are often far from Gaussian; second, many OOD data points fall within the effective support of the known classes' Gaussians. Thus, either way, it is hard to find a good threshold. In contrast, our proposed solution for OOD detection is based on a new latent space in which: 1) each known class is well captured by a nearly-isotropic Gaussian; 2) those Gaussians are far from each other and from the origin of the space (together, these properties effectively leave the area around the origin free for OOD data). Concretely, given a (possibly-trained) backbone deep net of choice, we use it to train a conditional variational model via a Kullback-Leibler loss, a triplet loss, and a new distancing loss that pushes classes away from each other. During inference, the class-dependent log-likelihood values of a deep-feature ensemble of the test point are also weighted based on reconstruction errors, further improving the decision rule. Experiments on popular benchmarks show that our method yields state-of-the-art results, a feat achieved despite the fact that, unlike some competitors, we make no use of OOD data for training or hyperparameter tuning. Our code is available at https://github.com/BGU-CS-VIL/vmdls.
UR - http://www.scopus.com/inward/record.url?scp=85146146784&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85146146784
T3 - Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
SP - 569
EP - 578
BT - Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
PB - Association for Uncertainty in Artificial Intelligence (AUAI)
T2 - 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022
Y2 - 1 August 2022 through 5 August 2022
ER -