Complexity of hyperconcepts

Joel Ratsaby

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

In machine learning, maximizing the sample margin can reduce the generalization error. Samples on which the target function has a large margin (γ) convey more information, since they yield more accurate hypotheses. Let X be a finite domain and S denote the set of all samples S ⊆ X of fixed cardinality m. Let H be a class of hypotheses h on X. A hyperconcept h̄ is defined as an indicator function for a set A ⊆ S of all samples on which the corresponding hypothesis h has a margin of at least γ. An estimate of the complexity of the class H̄ of hyperconcepts h̄ is obtained with explicit dependence on γ, the pseudo-dimension of H, and m.
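The hyperconcept definition above can be illustrated with a small sketch. The code below is not from the paper; it assumes, purely for illustration, linear hypotheses on points in ℝⁿ with the usual geometric margin, and builds the indicator of the set A of samples on which a hypothesis attains margin at least γ:

```python
import numpy as np

def margin(w, X, y):
    """Geometric margin of the linear hypothesis w on a labelled sample (X, y):
    the minimum of y_i * <w, x_i> / ||w|| over the sample points.
    (Illustrative choice; the paper treats general hypothesis classes.)"""
    return np.min(y * (X @ w) / np.linalg.norm(w))

def hyperconcept(w, gamma, samples):
    """Indicator of the set A of samples (each of fixed cardinality m)
    on which the hypothesis w has margin at least gamma."""
    return [1 if margin(w, X, y) >= gamma else 0 for (X, y) in samples]
```

For example, with w = (1, 0) and γ = 1, a sample whose points all lie at distance ≥ 1 on the correct side of the separating hyperplane is mapped to 1, while a sample containing a point at distance 0.5 is mapped to 0.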

Original language: English
Pages (from-to): 2-10
Number of pages: 9
Journal: Theoretical Computer Science
Volume: 363
Issue number: 1
State: Published - 25 Oct 2006

Keywords

  • Large-margin samples
  • Learning complexity
  • Pseudo-dimension
  • Sample-dependent error-bounds

