TY - GEN
T1 - Evaluation Guidelines to Deal with Implicit Phenomena to Assess Factuality in Data-to-Text Generation
AU - Eisenstadt, Roy
AU - Elhadad, Michael
N1 - Publisher Copyright:
© 2021 Association for Computational Linguistics.
PY - 2021/1/1
Y1 - 2021/1/1
N2 - Data-to-text generation systems are trained on large datasets, such as WebNLG, RotoWire, E2E or DART. Beyond traditional token-overlap evaluation metrics (BLEU or METEOR), a key concern faced by recent generators is to control the factuality of the generated text with respect to the input data specification. We report on our experience when developing an automatic factuality evaluation system for data-to-text generation that we are testing on WebNLG and E2E data. We aim to prepare gold data annotated manually to identify cases where the text communicates more information than is warranted based on the input data (extra) or fails to communicate data that is part of the input (missing). While analyzing reference (data, text) samples, we encountered a range of systematic uncertainties that are related to cases of implicit phenomena in text, and to the nature of the non-linguistic knowledge we expect to be involved when assessing factuality. We derive from our experience a set of evaluation guidelines to reach high inter-annotator agreement on such cases.
UR - http://www.scopus.com/inward/record.url?scp=85138752523&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85138752523
T3 - UNIMPLICIT 2021 - 1st Workshop on Understanding Implicit and Underspecified Language, Proceedings of the Workshop
SP - 20
EP - 27
BT - UNIMPLICIT 2021 - 1st Workshop on Understanding Implicit and Underspecified Language, Proceedings of the Workshop
A2 - Roth, Michael
A2 - Tsarfaty, Reut
A2 - Goldberg, Yoav
PB - Association for Computational Linguistics (ACL)
T2 - 1st Workshop on Understanding Implicit and Underspecified Language, UNIMPLICIT 2021
Y2 - 5 August 2021 through 6 August 2021
ER -