TY - GEN
T1 - EASY
T2 - 20th International Conference on Computational Linguistics and Intelligent Text Processing, CICLing 2019
AU - Litvak, Marina
AU - Vanetik, Natalia
AU - Veksler, Yael
N1 - Publisher Copyright:
© 2023, Springer Nature Switzerland AG.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Automatic text summarization aims at producing a shorter version of a document (or a document set). Extractive summarizers compile summaries by extracting a subset of sentences from a given text, while abstractive summarizers generate new sentences. Both types of summarizers strive to preserve the meaning of the original document as much as possible. Evaluation of summarization quality is a challenging task. Due to the expense of human evaluation, many researchers prefer to evaluate their systems automatically, with the help of software tools. Automatic evaluation usually compares a system-generated summary to one or more human-written summaries according to selected measures. However, a single metric cannot reflect all quality-related aspects of a summary. For instance, evaluating an extractive summarizer by comparing its summaries, at the word level, to human-written abstracts is not good enough, because the summaries being compared do not necessarily use the same vocabulary. Also, considering only single words does not reflect the coherence or readability of a generated summary. Multiple tools and metrics have been proposed in the literature for evaluating the quality of summarizers. However, studies show that correlations between these metrics do not always hold. In this paper, we present the EvAluation SYstem for Summarization (EASY), which enables the evaluation of summaries with several quality measures. The EASY system can also compare system-generated summaries to the extractive summaries produced by the OCCAMS baseline, which is considered the best possible extractive summarizer. EASY currently supports two languages, English and French, and is freely available online for the NLP community.
AB - Automatic text summarization aims at producing a shorter version of a document (or a document set). Extractive summarizers compile summaries by extracting a subset of sentences from a given text, while abstractive summarizers generate new sentences. Both types of summarizers strive to preserve the meaning of the original document as much as possible. Evaluation of summarization quality is a challenging task. Due to the expense of human evaluation, many researchers prefer to evaluate their systems automatically, with the help of software tools. Automatic evaluation usually compares a system-generated summary to one or more human-written summaries according to selected measures. However, a single metric cannot reflect all quality-related aspects of a summary. For instance, evaluating an extractive summarizer by comparing its summaries, at the word level, to human-written abstracts is not good enough, because the summaries being compared do not necessarily use the same vocabulary. Also, considering only single words does not reflect the coherence or readability of a generated summary. Multiple tools and metrics have been proposed in the literature for evaluating the quality of summarizers. However, studies show that correlations between these metrics do not always hold. In this paper, we present the EvAluation SYstem for Summarization (EASY), which enables the evaluation of summaries with several quality measures. The EASY system can also compare system-generated summaries to the extractive summaries produced by the OCCAMS baseline, which is considered the best possible extractive summarizer. EASY currently supports two languages, English and French, and is freely available online for the NLP community.
KW - Summarization
KW - Summarization metrics
KW - Summary quality evaluation
UR - http://www.scopus.com/inward/record.url?scp=85149932659&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-24340-0_40
DO - 10.1007/978-3-031-24340-0_40
M3 - Conference contribution
AN - SCOPUS:85149932659
SN - 9783031243394
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 529
EP - 545
BT - Computational Linguistics and Intelligent Text Processing - 20th International Conference, CICLing 2019, Revised Selected Papers
A2 - Gelbukh, Alexander
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 7 April 2019 through 13 April 2019
ER -