Peer Selection with Noisy Assessments

Omer Lev, Nicholas Mattei, Paolo Turrini, Stanislav Zhydkov

Research output: Working paper/Preprint


Abstract

In the peer selection problem, a group of agents must select a subset of themselves as winners for, e.g., peer-reviewed grants or prizes. Here, we take a Condorcet view of this aggregation problem: there is a ground-truth ordering over the agents, and we wish to select the best set of agents given the noisy assessments of their peers. In this model, some agents may be unreliable, while others might be self-interested, attempting to influence the outcome in their favour. In this paper we extend PeerNomination, the most accurate peer-reviewing algorithm to date, into WeightedPeerNomination, which is able to handle noisy and inaccurate agents. To do this, we explicitly formulate assessors' reliability weights in a way that does not violate strategyproofness, and use this information to reweight their scores. We show analytically that a weighting scheme can significantly improve the overall accuracy of the selection. Finally, we implement several instances of reweighting methods and show empirically that our methods are robust in the face of noisy assessments.
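To make the reweighting idea concrete, here is a minimal toy sketch of reliability-weighted score aggregation. The function names and the specific weighting rule (inverse mean deviation from the consensus score) are our own illustrative choices, not WeightedPeerNomination's actual scheme, which is defined in the paper itself.

```python
import numpy as np

def reliability_weights(scores: np.ndarray) -> np.ndarray:
    """Weight each reviewer by how closely their scores track the
    per-agent consensus (mean) score. scores[i, j] is reviewer i's
    score for agent j; NaN marks an agent that reviewer did not assess."""
    consensus = np.nanmean(scores, axis=0)                 # per-agent consensus
    dev = np.nanmean(np.abs(scores - consensus), axis=1)   # reviewer's mean deviation
    return 1.0 / (1.0 + dev)                               # lower deviation -> higher weight

def weighted_scores(scores: np.ndarray) -> np.ndarray:
    """Aggregate each agent's score as a reliability-weighted mean
    over the reviewers who assessed that agent."""
    w = reliability_weights(scores)
    assessed = ~np.isnan(scores)
    filled = np.where(assessed, scores, 0.0)
    return (w[:, None] * filled).sum(axis=0) / (w[:, None] * assessed).sum(axis=0)
```

For example, with two agreeing reviewers and one contrarian, the contrarian receives a lower weight, so the aggregate scores lean toward the consensus. Note that this toy rule, unlike the paper's construction, makes no attempt to preserve strategyproofness.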
Original language: English (GB)
State: Published - 21 Jul 2021

Keywords

  • cs.GT
  • cs.AI
  • cs.MA
  • 91A80, 91B10, 91B12, 91B14
  • J.4; I.2
