A framework for optimizing COVID-19 testing policy using a Multi Armed Bandit approach

Hagit Grushka-Cohen, Raphael Cohen, Bracha Shapira, Jacob Moran-Gilad, Lior Rokach

Research output: Working paper/Preprint


Abstract

Testing is an important part of tackling the COVID-19 pandemic. Availability of testing is a bottleneck due to constrained resources, so effective prioritization of individuals is necessary. Here, we discuss the impact of different prioritization policies on COVID-19 patient discovery and the ability of governments and health organizations to use the results for effective decision making. We suggest a framework for testing that balances maximal discovery of positive individuals with the need for population-based surveillance aimed at understanding disease spread and characteristics. This framework draws from similar approaches to prioritization in the domain of cyber-security: individuals are ranked using a risk score, and a portion of the testing capacity is reserved for random sampling. This approach is an application of Multi-Armed Bandits, balancing exploration and exploitation of the underlying distribution. We find that individuals can be ranked for effective testing using a few simple features, and that ranking with such models can capture 65% (CI: 64.7%-68.3%) of the positive individuals using less than 20% of the testing capacity, or 92.1% (CI: 91.1%-93.2%) of positive individuals using 70% of the capacity, allowing a significant portion of the tests to be reserved for population studies. Our approach lets experts and decision-makers tailor the resulting policies as needed, provides transparency into the ranking policy, and supports understanding disease spread in the population so that they can react quickly and in an informed manner.
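The allocation scheme described above (rank by risk score, spend most of the capacity on the top-ranked individuals, and reserve a fixed fraction for random surveillance sampling) can be sketched as a simple epsilon-greedy split. This is an illustrative sketch, not the authors' implementation; the function name, the dictionary-based individuals, and the 20% exploration fraction are assumptions chosen to mirror the capacity split discussed in the abstract.

```python
import random

def allocate_tests(individuals, risk_score, capacity, explore_frac=0.2):
    """Split a fixed testing capacity between exploitation and exploration.

    - Exploitation: test the individuals ranked highest by `risk_score`.
    - Exploration: test a uniform random sample of the remaining
      population, for unbiased surveillance of disease spread.
    The `explore_frac=0.2` default is a hypothetical choice echoing the
    "reserve a portion for random sampling" policy in the text.
    """
    n_explore = int(capacity * explore_frac)
    n_exploit = capacity - n_explore
    # Rank the population by predicted risk, highest first.
    ranked = sorted(individuals, key=risk_score, reverse=True)
    exploit = ranked[:n_exploit]
    # Random surveillance sample drawn from everyone not already selected.
    remaining = ranked[n_exploit:]
    explore = random.sample(remaining, min(n_explore, len(remaining)))
    return exploit + explore

# Usage: 100 individuals with a single (hypothetical) risk feature,
# capacity for 20 tests, 20% of which go to random surveillance.
random.seed(0)
people = [{"id": i, "risk": random.random()} for i in range(100)]
tested = allocate_tests(people, lambda p: p["risk"], capacity=20)
```

The random-sampling reserve is what keeps population-level estimates unbiased: testing only the top-ranked individuals would maximize case discovery but tell decision-makers little about prevalence in the broader population.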
Original language: English (GB)
State: Published - 28 Jul 2020

Publication series

Name: arXiv PrePrint

Keywords

  • cs.LG
  • stat.ML

