TY - GEN
T1 - Automatic Generation of Contrast Sets from Scene Graphs
T2 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021
AU - Bitton, Yonatan
AU - Stanovsky, Gabriel
AU - Schwartz, Roy
AU - Elhadad, Michael
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021/1/1
Y1 - 2021/1/1
N2 - Recent works have shown that supervised models often exploit data artifacts to achieve good test scores while their performance severely degrades on samples outside their training distribution. Contrast sets (Gardner et al., 2020) quantify this phenomenon by perturbing test samples in a minimal way such that the output label is modified. While most contrast sets were created manually, requiring intensive annotation effort, we present a novel method that leverages rich semantic input representations to automatically generate contrast sets for the visual question answering task. Our method computes the answers to perturbed questions, thus vastly reducing annotation cost and enabling thorough evaluation of models’ performance on various semantic aspects (e.g., spatial or relational reasoning). We demonstrate the effectiveness of our approach on the popular GQA dataset (Hudson and Manning, 2019) and its semantic scene graph image representation. We find that, despite GQA’s compositionality and carefully balanced label distribution, two strong models drop 13–17% in accuracy on our automatically constructed contrast set compared to the original validation set. Finally, we show that our method can be applied to the training set to mitigate the degradation in performance, opening the door to more robust models.
UR - http://www.scopus.com/inward/record.url?scp=85112054101&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85112054101
T3 - NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference
SP - 94
EP - 105
BT - NAACL-HLT 2021 - 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference
PB - Association for Computational Linguistics (ACL)
Y2 - 6 June 2021 through 11 June 2021
ER -