Active regression by stratification

Sivan Sabato, Remi Munos

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


We propose a new active learning algorithm for parametric linear regression with random design, and provide finite-sample convergence guarantees for general distributions in the misspecified model. This is the first active learner for this setting that can provably improve over passive learning. Unlike in other learning settings (such as classification), in regression the passive learning rate of O(1/ε) cannot in general be improved upon. Nonetheless, the so-called 'constant' in the rate of convergence, which is characterized by a distribution-dependent risk, can be improved in many cases. For a given distribution, achieving the optimal risk requires prior knowledge of the distribution. Following the stratification technique advocated in Monte Carlo function integration, our active learner approaches the optimal risk using piecewise constant approximations.
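The stratification idea the abstract borrows from Monte Carlo function integration can be illustrated with a minimal sketch. This is not the paper's algorithm; it is a toy comparison, with an assumed integrand f(x) = x³ on [0, 1], of plain Monte Carlo averaging against stratified sampling over equal-width strata (a piecewise constant partition of the domain):

```python
import random


def f(x):
    # toy integrand (an assumption for illustration); its true
    # integral over [0, 1] is 1/4
    return x ** 3


def uniform_estimate(n):
    # plain Monte Carlo: average f at n points drawn uniformly from [0, 1]
    return sum(f(random.random()) for _ in range(n)) / n


def stratified_estimate(n, k):
    # split [0, 1] into k equal-width strata and spend n // k samples
    # in each; the overall estimate averages the per-stratum means,
    # each stratum having weight 1/k
    per_stratum = n // k
    total = 0.0
    for j in range(k):
        left = j / k  # left endpoint of stratum j
        stratum_sum = sum(f(left + random.random() / k) for _ in range(per_stratum))
        total += stratum_sum / per_stratum
    return total / k


if __name__ == "__main__":
    random.seed(0)
    print("uniform:   ", uniform_estimate(10_000))
    print("stratified:", stratified_estimate(10_000, 10))
```

Within each stratum the integrand varies less than it does globally, so the stratified estimator has lower variance at the same sample budget; the paper's active learner applies the analogous idea to allocate label queries across a partition of the input space.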

Original language: English (GB)
Title of host publication: Advances in Neural Information Processing Systems 27 (NIPS 2014)
Number of pages: 9
State: Published - 2014
Event: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014 - Montreal, Canada
Duration: 8 Dec 2014 – 13 Dec 2014


Conference: 28th Annual Conference on Neural Information Processing Systems 2014, NIPS 2014

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing

