TY - JOUR
T1 - Bounds on the sample complexity for private learning and private data release
AU - Beimel, Amos
AU - Brenner, Hai
AU - Kasiviswanathan, Shiva Prasad
AU - Nissim, Kobbi
N1 - Funding Information:
Amos Beimel’s research was partly supported by the Israel Science Foundation (grant No. 938/09) and by the Frankel Center for Computer Science at Ben-Gurion University. Shiva Prasad Kasiviswanathan thanks Los Alamos National Laboratory and IBM T.J. Watson Research Center for supporting him while this research was performed. Hai Brenner and Kobbi Nissim’s research was supported by the Israel Science Foundation (grant No. 860/06).
PY - 2014/3/1
Y1 - 2014/3/1
AB - Learning is a task that generalizes many of the analyses that are applied to collections of data, in particular, to collections of sensitive individual information. Hence, it is natural to ask what can be learned while preserving individual privacy. Kasiviswanathan et al. (in SIAM J. Comput. 40(3):793-826, 2011) initiated such a discussion. They formalized the notion of private learning as a combination of PAC learning and differential privacy, and investigated which concept classes can be learned privately. Somewhat surprisingly, they showed that for finite, discrete domains (ignoring time complexity), every PAC learning task can be performed privately with polynomially many labeled examples; in many natural cases this can even be done in polynomial time. While these results seem to equate non-private and private learning, a significant gap remains: the sample complexity of (non-private) PAC learning is crisply characterized in terms of the VC-dimension of the concept class, whereas this relationship is lost in the constructions of private learners, which generally exhibit a higher sample complexity. Looking into this gap, we examine several private learning tasks and give tight bounds on their sample complexity. In particular, we show strong separations between the sample complexities of proper and improper private learners (no such separation exists for non-private learners), and between the sample complexities of efficient and inefficient proper private learners. Our results show that VC-dimension is not the right measure for characterizing the sample complexity of proper private learning. We also examine the task of private data release (as initiated by Blum et al. in STOC, pp. 609-618, 2008) and give new lower bounds on its sample complexity. Our results show that the logarithmic dependence on the size of the instance space is essential for private data release.
KW - Differential privacy
KW - PAC learning
KW - Private data release
KW - Sample complexity
UR - http://www.scopus.com/inward/record.url?scp=84894624083&partnerID=8YFLogxK
U2 - 10.1007/s10994-013-5404-1
DO - 10.1007/s10994-013-5404-1
M3 - Article
AN - SCOPUS:84894624083
SN - 0885-6125
VL - 94
SP - 401
EP - 437
JO - Machine Learning
JF - Machine Learning
IS - 3
ER -