How fast can we learn maximum entropy models of neural populations?

Elad Ganmor, Ronen Segev, Elad Schneidman

Research output: Contribution to journal › Article › peer-review



Most of our knowledge about how the brain encodes information comes from recordings of single neurons. However, computations in the brain are carried out by large groups of neurons. Modelling the joint activity of many interacting elements is computationally hard because of the large number of possible activity patterns and limited experimental data. Recently it was shown in several different neural systems that maximum entropy pairwise models, which rely only on the firing rates and pairwise correlations of neurons, are excellent models for the distribution of activity patterns of neural populations, and in particular, for their responses to natural stimuli. Using simultaneous recordings of large groups of neurons in the vertebrate retina responding to naturalistic stimuli, we show here that the relevant statistics required for fitting the pairwise model can be accurately estimated within seconds. Furthermore, while higher-order statistics may, in theory, improve model accuracy, in practice they are harmful for recording times of up to 20 minutes because of sampling noise. Finally, we demonstrate that trading accuracy for entropy may actually improve model performance when data is limited, and we suggest an optimization method that automatically adjusts model constraints in order to achieve good performance.
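To make the abstract concrete, the following is a minimal sketch of fitting a pairwise maximum entropy (Ising-type) model to binary spike words, matching only firing rates ⟨x_i⟩ and pairwise correlations ⟨x_i x_j⟩ by exact gradient ascent over all 2^n patterns. This is an illustration of the general technique, not the authors' implementation; the synthetic data, population size, learning rate, and iteration count are all arbitrary choices for the example.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 5  # small population so all 2^n activity patterns can be enumerated

# Synthetic stand-in for recorded binary spike words (1 = spike in a time bin).
data = (rng.random((2000, n)) < 0.2).astype(float)
# Inject a pairwise correlation between neurons 0 and 1 for the example.
data[:, 1] = np.where(rng.random(2000) < 0.5, data[:, 0], data[:, 1])

# Empirical statistics the pairwise model is constrained to match.
mean_data = data.mean(axis=0)             # firing rates <x_i>
corr_data = (data.T @ data) / len(data)   # pairwise moments <x_i x_j>

# All 2^n binary patterns, used to compute exact model expectations.
patterns = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

h = np.zeros(n)        # single-neuron fields
J = np.zeros((n, n))   # pairwise couplings (upper triangle used)

for step in range(3000):
    # P(x) ∝ exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j)
    energies = patterns @ h + np.einsum('pi,ij,pj->p',
                                        patterns, np.triu(J, 1), patterns)
    p = np.exp(energies)
    p /= p.sum()
    mean_model = p @ patterns                        # model <x_i>
    corr_model = patterns.T @ (patterns * p[:, None])  # model <x_i x_j>
    # Gradient of the log-likelihood: data moments minus model moments.
    h += 0.1 * (mean_data - mean_model)
    J += 0.1 * np.triu(corr_data - corr_model, 1)

# After convergence the model moments match the empirical ones.
print(np.max(np.abs(mean_data - mean_model)))
```

Exact enumeration is only feasible for small n; for the large populations discussed in the abstract one would replace the exact expectations with Monte Carlo estimates, which is where the sampling-noise issues the paper studies become relevant.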

Original language: English
Article number: 012020
Journal: Journal of Physics: Conference Series
State: Published - 1 Jan 2009

ASJC Scopus subject areas

  • General Physics and Astronomy


