Sparse low-order interaction network underlies a highly correlated and learnable neural population code

Elad Ganmor, Ronen Segev, Elad Schneidman

Research output: Contribution to journal › Article › peer-review

144 Scopus citations

Abstract

Information is carried in the brain by the joint activity patterns of large groups of neurons. Understanding the structure and function of population neural codes is challenging because of the exponential number of possible activity patterns and dependencies among neurons. We report here that for groups of ∼100 retinal neurons responding to natural stimuli, pairwise-based models, which were highly accurate for small networks, are no longer sufficient. We show that because of the sparse nature of the neural code, the higher-order interactions can be easily learned using a novel model and that a very sparse low-order interaction network underlies the code of large populations of neurons. Additionally, we show that the interaction network is organized in a hierarchical and modular manner, which hints at scalability. Our results suggest that learnability may be a key feature of the neural code.
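The pairwise-based models the abstract refers to are maximum-entropy (Ising-type) models constrained to match single-neuron firing rates and pairwise correlations. As an illustration only — not the authors' reliable-interaction model from the paper — here is a minimal sketch of fitting such a pairwise maximum-entropy model to binary spike patterns by exact gradient ascent, for a toy population small enough to enumerate all activity patterns (the toy data and all parameter choices below are hypothetical):

```python
import itertools
import numpy as np

def all_patterns(n):
    """Enumerate all 2^n binary activity patterns (feasible only for small n)."""
    return np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

def model_probs(h, J, patterns):
    """P(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j),
    with J symmetric and zero on the diagonal."""
    logp = patterns @ h + 0.5 * np.einsum('ki,ij,kj->k', patterns, J, patterns)
    logp -= logp.max()                      # numerical stability
    p = np.exp(logp)
    return p / p.sum()

def fit_pairwise_maxent(data, n_iter=5000, lr=0.4):
    """Fit fields h and couplings J by gradient ascent on the exact log-likelihood
    (moment matching: model means/correlations -> empirical means/correlations)."""
    n = data.shape[1]
    patterns = all_patterns(n)
    emp_mean = data.mean(axis=0)            # target firing rates <x_i>
    emp_corr = data.T @ data / len(data)    # target pairwise moments <x_i x_j>
    h = np.zeros(n)
    J = np.zeros((n, n))
    for _ in range(n_iter):
        p = model_probs(h, J, patterns)
        model_mean = p @ patterns
        model_corr = patterns.T @ (patterns * p[:, None])
        h += lr * (emp_mean - model_mean)   # gradient for the fields
        dJ = emp_corr - model_corr          # symmetric gradient for the couplings
        np.fill_diagonal(dJ, 0.0)           # diagonal moments are handled by h
        J += lr * dJ
    return h, J, patterns

# Hypothetical toy data: 4 binary neurons, with a dependency injected
# between neurons 0 and 1 (neuron 1 copies neuron 0 half the time).
rng = np.random.default_rng(1)
data = (rng.random((1000, 4)) < 0.3).astype(float)
data[:, 1] = np.where(rng.random(1000) < 0.5, data[:, 0], data[:, 1])

h, J, patterns = fit_pairwise_maxent(data)
p = model_probs(h, J, patterns)
```

After fitting, the model distribution reproduces the empirical firing rates and pairwise correlations, and the injected dependency appears as a positive coupling J[0, 1]. For ~100 neurons, exact enumeration of the 2^100 patterns is impossible, which is one reason the paper's sparse low-order approach matters.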

Original language: English
Pages (from-to): 9679-9684
Number of pages: 6
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 108
Issue number: 23
DOIs
State: Published - 7 Jun 2011

Keywords

  • Correlations
  • High-order
  • Maximum entropy
  • Neural networks
  • Sparseness

ASJC Scopus subject areas

  • General
