Sample Compression for Real-Valued Learners

Steve Hanneke, Aryeh Kontorovich, Menachem Sadigurschi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, to obtain a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result is also of independent interest. In particular, this result sheds new light on an open question of H. Simon (1997). We show applications to two regression problems: learning Lipschitz and bounded-variation functions.
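To illustrate the notion of a sample compression scheme discussed in the abstract (not the paper's actual construction), here is a minimal sketch of the classic size-1 scheme for one-dimensional threshold classifiers: the whole labeled sample is compressed to at most one example, from which a hypothesis consistent with the full sample can be reconstructed. Function names are illustrative only.

```python
import math

def compress(sample):
    """Size-1 sample compression for 1-D threshold classifiers.
    sample: list of (x, y) pairs with y in {0, 1}, assumed consistent
    with some threshold t, i.e. y = 1 iff x >= t.
    Returns the compressed subset (at most one point)."""
    positives = [x for x, y in sample if y == 1]
    if not positives:
        return []                     # all-negative sample: empty compression set
    return [(min(positives), 1)]      # keep only the leftmost positive example

def reconstruct(subset):
    """Rebuild a threshold hypothesis from the compressed subset."""
    t = subset[0][0] if subset else math.inf
    return lambda x: 1 if x >= t else 0

# The reconstructed hypothesis labels the entire original sample correctly,
# even though only one point was retained.
sample = [(-2.0, 0), (-0.5, 0), (1.0, 1), (3.0, 1)]
h = reconstruct(compress(sample))
assert all(h(x) == y for x, y in sample)
```

The paper's contribution concerns the much harder real-valued setting, where "correct reconstruction" is relaxed to uniform approximate reconstruction; the sketch above only conveys the binary-valued baseline idea.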
Original language: English GB
Title of host publication: Proceedings of the 30th International Conference on Algorithmic Learning Theory
Editors: Aurélien Garivier, Satyen Kale
Place of Publication: Chicago, Illinois
Publisher: PMLR
Pages: 466-488
Number of pages: 23
Volume: 98
State: Published - 2019

Publication series

Name: Proceedings of Machine Learning Research
Publisher: PMLR
