Learning with metric losses

Dan Tsir Cohen, Aryeh Kontorovich

Research output: Contribution to journal › Conference article › Peer-reviewed

Abstract

We propose a practical algorithm for learning mappings between two metric spaces, X and Y. Our procedure is strongly Bayes-consistent whenever X and Y are topologically separable and Y is “bounded in expectation” (our term; the separability assumption can be somewhat weakened). At this level of generality, ours is the first such learnability result for unbounded loss in the agnostic setting. Our technique is based on metric medoids (a variant of Fréchet means) and presents a significant departure from existing methods, which, as we demonstrate, fail to achieve Bayes-consistency on general instance- and label-space metrics. Our proofs introduce the technique of semi-stable compression, which may be of independent interest.
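
For readers unfamiliar with the notion, the sketch below (illustrative only, not the paper's algorithm; the function name `metric_medoid` and the example data are made up) computes a medoid of a finite sample in a metric space: the sample point minimizing the total distance to the other sample points, i.e., a Fréchet mean with the minimizer restricted to the sample itself.

```python
def metric_medoid(points, dist):
    """Medoid of a finite sample in a metric space: the sample point
    minimizing the total distance to all sample points. This is a
    Fréchet mean restricted to candidates from the sample itself."""
    return min(points, key=lambda p: sum(dist(p, q) for q in points))

# Example: medoid of a few labels in R^2 under the Euclidean metric.
from math import dist as euclidean  # math.dist, available in Python >= 3.8

labels = [(0.0, 0.0), (1.0, 0.0), (0.9, 0.1), (1.1, -0.1)]
print(metric_medoid(labels, euclidean))  # -> (1.0, 0.0), inside the cluster
```

Because the medoid only evaluates pairwise distances, it is well defined in any metric space, which is what allows the construction to cover general label spaces Y.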

Original language: English
Pages (from-to): 662-700
Number of pages: 39
Journal: Proceedings of Machine Learning Research
Volume: 178
State: Published - 1 Jan 2022
Event: 35th Conference on Learning Theory, COLT 2022 - London, United Kingdom
Duration: 2 Jul 2022 - 5 Jul 2022

Keywords

  • Bayes-consistency
  • metric space
  • regression
  • sample compression

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
