Sample Compression for Real-Valued Learners

    Research output: Contribution to journal › Conference article › peer-review


    Abstract

    We give an algorithmically efficient version of the learner-to-compression scheme conversion in Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, to obtain a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result is also of independent interest. In particular, this result sheds new light on an open question of H. Simon (1997). We show applications to two regression problems: learning Lipschitz and bounded-variation functions.
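To make the notion of a sample compression scheme with uniform approximate reconstruction concrete, here is a minimal toy sketch for one of the applications mentioned above, learning 1-Lipschitz functions on [0, 1]. This is an illustration of the general concept only, not the paper's construction: the compressor keeps a small subset of the sample (an ε-net over the inputs), and the reconstructor interpolates through the kept points; the Lipschitz property then bounds the reconstruction error uniformly. All function names here are hypothetical.

```python
import numpy as np

def compress(xs, ys, eps):
    """Toy compressor: keep sample points whose x-values are at least eps
    apart, so the kept x's form (roughly) an eps-net of the sampled range.
    Illustrative only; not the compression scheme from the paper."""
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    kept_x, kept_y = [xs[0]], [ys[0]]
    for x, y in zip(xs[1:], ys[1:]):
        if x - kept_x[-1] >= eps:
            kept_x.append(x)
            kept_y.append(y)
    return np.array(kept_x), np.array(kept_y)

def reconstruct(kept_x, kept_y):
    """Toy reconstructor: piecewise-linear interpolation through the
    compressed points. For a 1-Lipschitz target sampled densely enough,
    the reconstruction error is uniformly O(eps) over the domain."""
    return lambda x: np.interp(x, kept_x, kept_y)

# Dense sample of a 1-Lipschitz function (sin on [0, 1]).
xs = np.linspace(0.0, 1.0, 1001)
ys = np.sin(xs)

kx, ky = compress(xs, ys, eps=0.05)
g = reconstruct(kx, ky)

grid = np.linspace(0.0, 1.0, 2001)
uniform_err = np.max(np.abs(g(grid) - np.sin(grid)))
```

The point of the sketch is the interface: the compressed representation is a bounded-size subset of the sample, and correctness is measured by a uniform (sup-norm) bound on the reconstructed hypothesis, which is the "uniform approximate reconstruction" guarantee the abstract refers to.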

    Original language: English
    Pages (from-to): 466-488
    Number of pages: 23
    Journal: Proceedings of Machine Learning Research
    Volume: 98
    State: Published - 1 Jan 2019
    Event: 30th International Conference on Algorithmic Learning Theory, ALT 2019 - Chicago, United States
    Duration: 22 Mar 2019 - 24 Mar 2019

    Keywords

    • Boosting
    • Compression Scheme
    • Empirical Risk Minimization
    • Regression

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Software
    • Control and Systems Engineering
    • Statistics and Probability
