The process of peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies and academic publications around the world employ experts to review and select the best science for funding and publication. But this is not unique to science: evaluating and selecting the best from among a group of peers is a much more general problem. A professional society may want to give awards to a subset of its members based on the opinions of all members; an instructor for a Massive Open Online Course (MOOC) may want to crowdsource grading; or a marketing company may select ideas from group brainstorming sessions based on peer evaluation. In all of these settings we wish to select a small set of winners judged to be the best by the community itself, a community that includes those who wish to win and who may have conflicts of interest.
This problem, known as the peer selection problem, is the focus of this research. Within a peer selection setting there may be competing priorities and inherent biases among the reviewers, and it is necessary to develop methods and algorithms that align the individual incentives of reviewers with the overall goal of selecting the best set. The intellectual merit of this project lies in expanding our understanding of, and developing novel algorithms for, the process of peer evaluation and peer selection. Within the fields that use peer review, conflicts of interest and peer selection bias have been cited as impediments to broader participation in science.
The project will achieve its goal of expanding our knowledge and building mechanisms for peer evaluation and selection through four specific aims. (1) The first aim is to develop novel metrics for the evaluation of peer selection mechanisms by defining both normative and quantitative properties that allow us to precisely describe features of the peer evaluation and selection process. (2) The second aim is to develop distributed peer selection mechanisms that can be used without requiring a centralized controller. This project will develop tools to understand how these mechanisms behave in a distributed setting, as well as opportunities to create novel mechanisms for the unique challenges this setting poses. (3) The third aim is to develop our understanding of multi-stage peer evaluation for peer selection. Motivated by the rolling review cycle of many academic conferences, journals, and even some funding programs, there is a need to investigate the properties of peer evaluation and selection mechanisms when reviews (evaluations) may propagate between specific selection settings. (4) The final aim is to incentivize effort in peer selection: there is a fundamental tension between the classic social choice property of impartiality, i.e., that an agent may not affect their own probability of being selected, and the need to provide incentives for reviewers to invest effort in the peer evaluation process. This project will develop a toolkit of mechanisms that allows system designers to rationally choose tradeoffs between the amount of information an agent knows, incentives for effort, and the potential for malicious behavior.
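To make the impartiality property in aim (4) concrete, the sketch below shows one well-known way to achieve it: a random-partition mechanism, in which agents are split into two groups and each group's winners are chosen using only the other group's scores, so no agent's own report can influence their own chance of selection. This is a minimal illustrative sketch, not a mechanism developed by this project; the function name `impartial_select` and the `scores[i][j]` input format are assumptions made for the example.

```python
import random

def impartial_select(scores, k, seed=0):
    """Impartially select k winners via a random partition.

    scores[i][j] is agent i's score for agent j (illustrative format).
    Agents are split into two groups; each group's winners are chosen
    using only the *other* group's scores, so an agent's own report
    never affects their own probability of being selected.
    """
    rng = random.Random(seed)
    n = len(scores)
    agents = list(range(n))
    rng.shuffle(agents)
    group_a, group_b = agents[: n // 2], agents[n // 2:]
    winners = []
    # Each group contributes roughly half of the k winners.
    for voters, candidates, quota in (
        (group_a, group_b, k // 2),
        (group_b, group_a, k - k // 2),
    ):
        totals = {c: sum(scores[v][c] for v in voters) for c in candidates}
        winners += sorted(candidates, key=totals.get, reverse=True)[:quota]
    return winners
```

Note that impartiality comes at a cost: a strong candidate may lose simply because the best agents landed in the same group, which is exactly the kind of tradeoff between impartiality and selection quality that aim (4) studies.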
Effective start/end date: 1/01/21 → …
- United States-Israel Binational Science Foundation (BSF)