TY - GEN
T1 - How Confident Was Your Reviewer? Estimating Reviewer Confidence from Peer Review Texts
AU - Bharti, Prabhat Kumar
AU - Ghosal, Tirthankar
AU - Agrawal, Mayank
AU - Ekbal, Asif
N1 - Publisher Copyright:
© 2022, Springer Nature Switzerland AG.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - The scholarly peer-reviewing system is the primary means to ensure the quality of scientific publications. An area or program chair relies on the reviewer’s confidence score to address conflicting reviews and borderline cases. Usually, reviewers themselves disclose how confident they are in reviewing a certain paper. However, there could be inconsistencies between what reviewers self-report and how the review text appears to readers. It is the job of the area or program chair to consider such inconsistencies and make a reasonable judgment. Peer review texts could be a valuable resource for Natural Language Processing (NLP) studies, and the community is uniquely poised to investigate such inconsistencies in the paper vetting system. In this work, we attempt to automatically estimate how confident the reviewer was directly from the review text. We experiment with five data-driven methods to predict the reviewer’s confidence score: Linear Regression, Decision Tree, Support Vector Regression, Bidirectional Encoder Representations from Transformers (BERT), and a hybrid of Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Networks (CNN) on top of BERT representations. Our experiments show that the deep neural model grounded on BERT representations achieves encouraging performance.
AB - The scholarly peer-reviewing system is the primary means to ensure the quality of scientific publications. An area or program chair relies on the reviewer’s confidence score to address conflicting reviews and borderline cases. Usually, reviewers themselves disclose how confident they are in reviewing a certain paper. However, there could be inconsistencies between what reviewers self-report and how the review text appears to readers. It is the job of the area or program chair to consider such inconsistencies and make a reasonable judgment. Peer review texts could be a valuable resource for Natural Language Processing (NLP) studies, and the community is uniquely poised to investigate such inconsistencies in the paper vetting system. In this work, we attempt to automatically estimate how confident the reviewer was directly from the review text. We experiment with five data-driven methods to predict the reviewer’s confidence score: Linear Regression, Decision Tree, Support Vector Regression, Bidirectional Encoder Representations from Transformers (BERT), and a hybrid of Bidirectional Long Short-Term Memory (BiLSTM) and Convolutional Neural Networks (CNN) on top of BERT representations. Our experiments show that the deep neural model grounded on BERT representations achieves encouraging performance.
KW - Confidence prediction
KW - Deep neural network
KW - Peer reviews
UR - http://www.scopus.com/inward/record.url?scp=85131117674&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-06555-2_9
DO - 10.1007/978-3-031-06555-2_9
M3 - Conference contribution
AN - SCOPUS:85131117674
SN - 9783031065545
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 126
EP - 139
BT - Document Analysis Systems - 15th IAPR International Workshop, DAS 2022, Proceedings
A2 - Uchida, Seiichi
A2 - Barney, Elisa
A2 - Eglin, Véronique
PB - Springer Science and Business Media Deutschland GmbH
T2 - 15th IAPR International Workshop on Document Analysis Systems, DAS 2022
Y2 - 22 May 2022 through 25 May 2022
ER -