THE SAMPLE COMPLEXITY OF DISTRIBUTION-FREE PARITY LEARNING IN THE ROBUST SHUFFLE MODEL

Kobbi Nissim, Chao Yan

Research output: Contribution to journal › Article › peer-review

Abstract

We provide a lower bound on the sample complexity of distribution-free parity learning in the realizable case in the shuffle model of differential privacy. Namely, we show that the sample complexity of learning d-bit parity functions is Ω(2^{d/2}). Our result extends a recent similar lower bound on the sample complexity of private agnostic learning of parity functions in the shuffle model by Cheu and Ullman (12). We also sketch a simple shuffle model protocol demonstrating that our results are tight up to poly(d) factors.
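As a quick illustration (this sketch and its names are not from the paper itself), a d-bit parity function is indexed by a hidden subset S of the coordinates and outputs the XOR of the bits of x at those positions; in the realizable case, every labeled example is consistent with some such S:

```python
# Illustrative sketch of a d-bit parity function chi_S(x) = XOR of x's bits
# at positions in a hidden subset S (names here are chosen for exposition).
# In the realizable setting studied in the paper, labels are generated by
# some unknown S; absent privacy constraints, S could be recovered by
# solving a linear system over GF(2).

def parity(x, S):
    """Return the XOR (sum mod 2) of the bits of x at the positions in S."""
    return sum(x[i] for i in S) % 2

d = 4
S = {0, 2}          # hidden subset defining the target parity function
x = [1, 0, 1, 1]    # one sample point in {0,1}^d
print(parity(x, S)) # x[0] XOR x[2] = 1 XOR 1 = 0
```

The lower bound in the abstract says that any robustly differentially private shuffle-model protocol must see Ω(2^{d/2}) such samples to learn S, even though the non-private problem needs only about d.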

Original language: English
Pages (from-to): 1-14
Number of pages: 14
Journal: Journal of Privacy and Confidentiality
Volume: 12
Issue number: 2
DOIs
State: Published - 1 Jan 2022
Externally published: Yes

Keywords

  • Differential privacy
  • parity learning
  • private learning

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Statistics and Probability
  • Computer Science Applications

