TY - JOUR
T1 - Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization
AU - Rotem, Oded
AU - Schwartz, Tamar
AU - Maor, Ron
AU - Tauber, Yishay
AU - Shapiro, Maya Tsarfati
AU - Meseguer, Marcos
AU - Gilboa, Daniella
AU - Seidman, Daniel S.
AU - Zaritsky, Assaf
N1 - Publisher Copyright:
© The Author(s) 2024.
PY - 2024/12/1
Y1 - 2024/12/1
N2 - The success of deep learning in identifying complex patterns exceeding human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a “black box” lacking human-meaningful explanations for the model’s decisions. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-driving visual property. This design enables “human-in-the-loop” interpretation by generating disentangled, exaggerated counterfactual explanations. We apply DISCOVER to interpret classification of in vitro fertilization embryo morphology quality. We quantitatively and systematically confirm the interpretation of known embryo properties, discover properties without previous explicit measurements, and quantitatively determine and empirically verify the classification decisions for specific embryo instances. We show that DISCOVER provides human-interpretable understanding of “black box” classification models, proposes hypotheses to decipher underlying biomedical mechanisms, and provides transparency for the classification of individual predictions.
UR - http://www.scopus.com/inward/record.url?scp=85202444875&partnerID=8YFLogxK
DO - 10.1038/s41467-024-51136-9
M3 - Article
C2 - 39191720
AN - SCOPUS:85202444875
SN - 2041-1723
VL - 15
JO - Nature Communications
JF - Nature Communications
IS - 1
M1 - 7390
ER -