TY - JOUR
T1 - Private learning of halfspaces
T2 - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
AU - Kaplan, Haim
AU - Mansour, Yishay
AU - Stemmer, Uri
AU - Tsfadia, Eliad
N1 - Funding Information:
Haim Kaplan is partially supported by Israel Science Foundation (grant 1595/19), German-Israeli Foundation (grant 1367/2017), and the Blavatnik Family Foundation.
Funding Information:
Yishay Mansour has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 882396), and by the Israel Science Foundation (grant number 993/17).
Funding Information:
Uri Stemmer is supported in part by the Israel Science Foundation (grant 1871/19), and by the Cyber Security Research Center at Ben-Gurion University of the Negev.
Publisher Copyright:
© 2020 Neural information processing systems foundation. All rights reserved.
PY - 2020/1/1
Y1 - 2020/1/1
N2 - We present a differentially private learner for halfspaces over a finite grid G in R^d with sample complexity ≈ d^2.5 · 2^(log* |G|), which improves the state-of-the-art result of [Beimel et al., COLT 2019] by a d^2 factor. The building block for our learner is a new differentially private algorithm for approximately solving the linear feasibility problem: Given a feasible collection of m linear constraints of the form Ax ≥ b, the task is to privately identify a solution x that satisfies most of the constraints. Our algorithm is iterative, where each iteration determines the next coordinate of the constructed solution x.
AB - We present a differentially private learner for halfspaces over a finite grid G in R^d with sample complexity ≈ d^2.5 · 2^(log* |G|), which improves the state-of-the-art result of [Beimel et al., COLT 2019] by a d^2 factor. The building block for our learner is a new differentially private algorithm for approximately solving the linear feasibility problem: Given a feasible collection of m linear constraints of the form Ax ≥ b, the task is to privately identify a solution x that satisfies most of the constraints. Our algorithm is iterative, where each iteration determines the next coordinate of the constructed solution x.
UR - http://www.scopus.com/inward/record.url?scp=85099076812&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85099076812
SN - 1049-5258
VL - 2020-December
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
Y2 - 6 December 2020 through 12 December 2020
ER -