TY - JOUR
T1 - Large width nearest prototype classification on general distance spaces
AU - Anthony, Martin
AU - Ratsaby, Joel
N1 - Funding Information:
This work was supported in part by a research grant from the Suntory and Toyota International Centres for Economics and Related Disciplines at the London School of Economics. The authors thank the referees for their comments, which resulted in shorter proofs of Propositions 2 and 4.
Publisher Copyright:
© 2018 Elsevier B.V.
PY - 2018/8/22
Y1 - 2018/8/22
N2 - In this paper we consider the problem of learning nearest-prototype classifiers in any finite distance space; that is, in any finite set equipped with a distance function. An important advantage of a distance space over a metric space is that the triangle inequality need not be satisfied, which makes our results potentially very useful in practice. We consider a family of binary classifiers for learning nearest-prototype classification on distance spaces, building on the concept of large-width learning which we introduced and studied in earlier works. Nearest-prototype classification is a more general version of the ubiquitous nearest-neighbor classifier: a prototype may or may not be a sample point. One advantage of the approach taken in this paper is that the error bounds depend on a ‘width’ parameter, which can be sample-dependent and thereby yield a tighter bound.
AB - In this paper we consider the problem of learning nearest-prototype classifiers in any finite distance space; that is, in any finite set equipped with a distance function. An important advantage of a distance space over a metric space is that the triangle inequality need not be satisfied, which makes our results potentially very useful in practice. We consider a family of binary classifiers for learning nearest-prototype classification on distance spaces, building on the concept of large-width learning which we introduced and studied in earlier works. Nearest-prototype classification is a more general version of the ubiquitous nearest-neighbor classifier: a prototype may or may not be a sample point. One advantage of the approach taken in this paper is that the error bounds depend on a ‘width’ parameter, which can be sample-dependent and thereby yield a tighter bound.
KW - Distance space
KW - LVQ
KW - Large margin learning
KW - Metric space
KW - Nearest-neighbor classification
UR - http://www.scopus.com/inward/record.url?scp=85046809141&partnerID=8YFLogxK
U2 - 10.1016/j.tcs.2018.04.045
DO - 10.1016/j.tcs.2018.04.045
M3 - Article
AN - SCOPUS:85046809141
VL - 738
SP - 65
EP - 79
JO - Theoretical Computer Science
JF - Theoretical Computer Science
SN - 0304-3975
ER -