Boosting conditional probability estimators

Dan Gutfreund, Aryeh Kontorovich, Ran Levy, Michal Rosen-Zvi

Research output: Contribution to journal › Article › peer-review



In the standard agnostic multiclass model, <instance, label> pairs are sampled independently from some underlying distribution. This distribution induces a conditional probability over the labels given an instance, and our goal in this paper is to learn this conditional distribution. Since even unconditional densities are quite challenging to learn, we give our learner access to <instance, conditional distribution> pairs. Assuming a base learner oracle in this model, we might seek a boosting algorithm for constructing a strong learner. Unfortunately, without further assumptions, this is provably impossible. However, we give a new boosting algorithm that succeeds in the following sense: given a base learner guaranteed to achieve some average accuracy (i.e., risk), we efficiently construct a learner that achieves the same level of accuracy with arbitrarily high probability. We give generalization guarantees of several different kinds, including distribution-free accuracy and risk bounds. None of our estimates depend on the number of boosting rounds and some of them admit dimension-free formulations.
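The guarantee stated above, turning a base learner with a bound on its *average* risk into one that meets that risk level with arbitrarily *high probability*, is in the spirit of classical confidence boosting. As a hedged illustration only (this is not the paper's algorithm, and `boost_confidence` and the toy majority-class base learner are hypothetical names invented here), one standard sketch is: rerun the base learner on fresh random splits and keep the hypothesis with the lowest empirical risk on held-out data.

```python
import random

def boost_confidence(base_learner, data, rounds=10, seed=0):
    """Confidence-boosting sketch (illustrative, not the paper's method):
    run the base learner several times on fresh train/validation splits
    and keep the hypothesis with the lowest empirical validation risk.
    Repetition drives down the probability that the returned hypothesis
    exceeds the base learner's average risk by much."""
    rng = random.Random(seed)
    best_h, best_risk = None, float("inf")
    for _ in range(rounds):
        shuffled = data[:]
        rng.shuffle(shuffled)
        split = len(shuffled) // 2
        train, val = shuffled[:split], shuffled[split:]
        h = base_learner(train)
        # Empirical 0-1 risk on the held-out half.
        risk = sum(h(x) != y for x, y in val) / len(val)
        if risk < best_risk:
            best_h, best_risk = h, risk
    return best_h, best_risk

# Toy base learner: always predict the majority label of its training set.
def majority_base_learner(train):
    ones = sum(y for _, y in train)
    label = 1 if 2 * ones >= len(train) else 0
    return lambda x: label
```

By Hoeffding-style arguments, the validation estimates concentrate around the true risks, so selecting the empirically best of several independent runs succeeds with probability growing in the number of rounds; the abstract's stronger point is that such guarantees can be made without the estimates depending on the number of boosting rounds.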

Original language: English
Pages (from-to): 129-144
Number of pages: 16
Journal: Annals of Mathematics and Artificial Intelligence
Issue number: 1-3
State: Published - 1 Mar 2017


  • Boosting
  • Conditional density


