Robust Learning from Discriminative Feature Feedback

Sanjoy Dasgupta, Sivan Sabato

Research output: Contribution to journal › Conference article › peer-review

4 Scopus citations

Abstract

Recent work introduced the model of learning from discriminative feature feedback, in which a human annotator not only provides labels of instances, but also identifies discriminative features that highlight important differences between pairs of instances. It was shown that such feedback can be conducive to learning, and makes it possible to efficiently learn some concept classes that would otherwise be intractable. However, these results all relied upon perfect annotator feedback. In this paper, we introduce a more realistic, robust version of the framework, in which the annotator is allowed to make mistakes. We show how such errors can be handled algorithmically, in both an adversarial and a stochastic setting. In particular, we derive regret bounds in both settings that, as in the case of a perfect annotator, are independent of the number of features. We show that this result cannot be obtained by a naive reduction from the robust setting to the non-robust setting.
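To make the interaction model concrete, below is a minimal, hypothetical sketch of the protocol the abstract describes, not the paper's actual algorithm: the learner predicts a label together with an explaining feature, and the annotator either accepts the prediction or returns a discriminative feature; in the robust setting the annotator's feedback may occasionally be wrong. The callables `predict`, `annotator`, and `update` are illustrative placeholders, not names from the paper.

```python
# Hypothetical sketch of the discriminative-feature-feedback interaction loop
# described in the abstract (not the paper's algorithm). All names are illustrative.
from typing import Any, Callable, Iterable, Optional, Tuple

# Feedback is either None (prediction accepted) or the name of a discriminative
# feature distinguishing the instance from a previously seen exemplar.
Feedback = Optional[str]

def interaction_loop(
    instances: Iterable[Any],
    predict: Callable[[Any], Tuple[str, str]],       # instance -> (predicted label, explaining feature)
    annotator: Callable[[Any, str, str], Feedback],  # may return erroneous feedback in the robust setting
    update: Callable[[Any, str], None],              # learner revises its hypothesis after a rejection
) -> int:
    """Run the online protocol and count how often the annotator rejects a prediction."""
    mistakes = 0
    for x in instances:
        label, explanation = predict(x)
        feedback = annotator(x, label, explanation)
        if feedback is not None:  # rejected: feedback names a (possibly incorrect) discriminative feature
            mistakes += 1
            update(x, feedback)
    return mistakes
```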

Original language: English
Pages (from-to): 973-982
Number of pages: 10
Journal: Proceedings of Machine Learning Research
Volume: 108
State: Published - 1 Jan 2020
Event: 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020 - Virtual, Online
Duration: 26 Aug 2020 - 28 Aug 2020

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
