Abstract
We give an algorithmically efficient version of the learner-to-compression-scheme conversion of Moran and Yehudayoff (2016). We further extend this technique to real-valued hypotheses, obtaining a bounded-size sample compression scheme via an efficient reduction to a certain generic real-valued learning strategy. To our knowledge, this is the first general compressed regression result (regardless of efficiency or boundedness) guaranteeing uniform approximate reconstruction. Along the way, we develop a generic procedure for constructing weak real-valued learners out of abstract regressors; this result, which sheds new light on an open question of H. Simon (1997), may be of independent interest. We show applications to two regression problems: learning Lipschitz and bounded-variation functions.
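To make the abstract's notion concrete, here is a minimal sketch, not the paper's construction, of a bounded-size sample compression scheme with uniform approximate reconstruction for one-dimensional Lipschitz regression in the realizable case. The function names `compress` and `reconstruct` and the greedy selection rule are illustrative assumptions: a point is kept only when the Lipschitz extension of the points kept so far misses its label by more than eps, which forces kept points to be eps/(2L)-separated and hence bounds the compression set on a bounded domain.

```python
import numpy as np

def reconstruct(kx, ky, x, L):
    """Upper McShane L-Lipschitz extension of the kept points, evaluated at x.
    For labels realized by an L-Lipschitz f, it agrees with ky on kx and
    upper-bounds f everywhere, so its error at skipped points never grows."""
    return float(np.min(ky + L * np.abs(kx - x)))

def compress(xs, ys, L, eps):
    """Greedy compression: scan the sample left to right and keep a point
    only when the extension of the points kept so far misses its label
    by more than eps. Returns the kept subsample (the compression set)."""
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    kept = [0]
    for i in range(1, len(xs)):
        if abs(reconstruct(xs[kept], ys[kept], xs[i], L) - ys[i]) > eps:
            kept.append(i)
    return xs[kept], ys[kept]

# Usage: labels from the 1-Lipschitz function |x|; every sample label is
# reconstructed from the compression set to within eps (uniformly).
xs = np.linspace(-1.0, 1.0, 201)
kx, ky = compress(xs, np.abs(xs), L=1.0, eps=0.05)
assert all(abs(reconstruct(kx, ky, x, 1.0) - abs(x)) <= 0.05 for x in xs)
print(f"kept {len(kx)} of {len(xs)} points")
```

The paper's actual scheme goes through boosting of weak real-valued learners rather than this geometric greedy rule; the sketch only illustrates what "bounded-size compression with uniform approximate reconstruction" asks for in the Lipschitz application.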
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 466-488 |
| Number of pages | 23 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 98 |
| State | Published - 1 Jan 2019 |
| Event | 30th International Conference on Algorithmic Learning Theory, ALT 2019 - Chicago, United States |
| Duration | 22 Mar 2019 → 24 Mar 2019 |
Keywords
- Boosting
- Compression Scheme
- Empirical Risk Minimization
- Regression
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability