Deep unsupervised feature selection by discarding nuisance and correlated features

Uri Shaham, Ofir Lindenbaum, Jonathan Svirsky, Yuval Kluger

Research output: Contribution to journal › Article › peer-review

8 Scopus citations


Modern datasets often contain large subsets of correlated features and nuisance features, which are unrelated or only loosely related to the main underlying structures of the data. Nuisance features can be identified using the Laplacian score criterion, which evaluates the importance of a given feature via its consistency with the graph Laplacian's leading eigenvectors. We demonstrate that in the presence of large numbers of nuisance features, the Laplacian must be computed on the subset of selected features rather than on the complete feature set. To do this, we propose a fully differentiable approach for unsupervised feature selection, utilizing the Laplacian score criterion to avoid the selection of nuisance features. We employ an autoencoder architecture to cope with correlated features, trained to reconstruct the data from the subset of selected features. We build on the recently proposed concrete layer, which allows controlling the number of selected features via the architectural design, simplifying the optimization process. Experimenting on several real-world datasets, we demonstrate that our proposed approach outperforms similar approaches designed to avoid only correlated or nuisance features, but not both. Several state-of-the-art clustering results are reported. Our code is publicly available at
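To make the criterion concrete, the following is a minimal sketch of the classical Laplacian score (He et al., 2005), which the abstract builds on: a feature is scored by how smoothly it varies over a nearest-neighbor affinity graph of the data, with lower scores indicating features more consistent with the graph structure (and higher scores indicating likely nuisance features). This is an illustration of the general criterion, not the paper's implementation; all names and parameters here are assumptions.

```python
import numpy as np

def laplacian_score(X, n_neighbors=5, sigma=1.0):
    """Illustrative Laplacian score: lower = more consistent with
    the data's nearest-neighbor graph. Parameters are hypothetical,
    not the paper's API."""
    n, p = X.shape
    # Pairwise squared distances and RBF affinities.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Keep only each point's k nearest neighbors, symmetrized.
    idx = np.argsort(d2, axis=1)[:, 1:n_neighbors + 1]
    mask = np.zeros_like(W, dtype=bool)
    rows = np.repeat(np.arange(n), n_neighbors)
    mask[rows, idx.ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)
    d = W.sum(1)                 # node degrees
    L = np.diag(d) - W           # unnormalized graph Laplacian
    scores = np.empty(p)
    for r in range(p):
        f = X[:, r]
        # Center the feature with respect to the degree distribution.
        f = f - (f @ d) / d.sum()
        # Smoothness on the graph, normalized by degree-weighted variance.
        scores[r] = (f @ L @ f) / max((f * d) @ f, 1e-12)
    return scores
```

On a toy dataset with one cluster-separating feature and one pure-noise feature, the informative feature receives the lower score; the paper's point is that this ranking degrades when the graph itself is built on many nuisance features, motivating computing the Laplacian on the selected subset instead.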

Original language: English
Pages (from-to): 34-43
Number of pages: 10
Journal: Neural Networks
State: Published - 1 Aug 2022
Externally published: Yes


Keywords

  • Concrete layer
  • Laplacian score
  • Unsupervised feature selection

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cognitive Neuroscience


