TY - GEN
T1 - Privately Learning Thresholds
T2 - Closing the Exponential Gap
AU - Kaplan, Haim
AU - Ligett, Katrina
AU - Mansour, Yishay
AU - Naor, Moni
AU - Stemmer, Uri
PY - 2020
Y1 - 2020
N2 - We present a private agnostic learner for halfspaces over an arbitrary finite domain X ⊂ R^d with sample complexity poly(d, 2^(log* |X|)). The building block for this learner is a differentially private algorithm for locating an approximate center point of m > poly(d, 2^(log* |X|)) points, a high-dimensional generalization of the median function. Our construction establishes a relationship between these two problems that is reminiscent of the relation between the median and learning one-dimensional thresholds [Bun et al. FOCS '15]. This relationship suggests that the problem of privately locating a center point may have further applications in the design of differentially private algorithms. We also provide lower bounds on the sample complexity of privately finding a point in the convex hull: for approximate differential privacy we show a lower bound of m = Ω(d + log* |X|), whereas for pure differential privacy m = Ω(d log |X|).
KW - Differential privacy
KW - Private PAC learning
KW - Halfspaces
KW - Quasi-concave functions
M3 - Conference contribution
VL - 125
T3 - Proceedings of Machine Learning Research
SP - 2263
EP - 2285
BT - Conference on Learning Theory, COLT 2020, 9-12 July 2020, Virtual Event [Graz, Austria]
A2 - Abernethy, Jacob D.
A2 - Agarwal, Shivani
PB - PMLR
ER -