Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6266182 | Current Opinion in Neurobiology | 2016 | 8 | |
- Strongly correlated population codes are accurately described by low-order models.
- Population coding models enable learning the semantic organization of the neural codebook.
- The code of large populations may be learned by combining subnetworks hierarchically.
- A power-law-like codeword distribution suggests learnability of the code as a feature (see the sketch after these highlights).
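The last highlight refers to a power-law-like distribution of codeword frequencies. As a hedged illustration only, the Python sketch below counts how often each binary population activity pattern (codeword) occurs in a toy, artificially correlated raster and checks for an approximately linear log-log decay of frequency with rank. The function name `codeword_frequencies`, the synthetic data, and the array shapes are assumptions for illustration, not code or data from the paper.

```python
import numpy as np

def codeword_frequencies(spikes):
    """spikes: (n_timebins, n_neurons) binary 0/1 array of population activity.
    Treats each time bin's pattern as one codeword and returns codeword
    probabilities sorted from most to least frequent."""
    _, counts = np.unique(spikes.astype(int), axis=0, return_counts=True)
    p = counts / counts.sum()
    return np.sort(p)[::-1]

# Toy usage: artificially correlated binary activity of 10 hypothetical neurons.
rng = np.random.default_rng(0)
shared = rng.random((5000, 1))                    # common input drives correlations
spikes = (rng.random((5000, 10)) < 0.15 + 0.3 * shared).astype(int)
p = codeword_frequencies(spikes)
rank = np.arange(1, p.size + 1)
# Roughly linear log(p) vs. log(rank) is the power-law-like (Zipf-like) signature.
slope = np.polyfit(np.log(rank), np.log(p), 1)[0]
print(f"{p.size} distinct codewords; log-log slope ~ {slope:.2f}")
```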
The ability to record the joint activity of large groups of neurons would allow direct study of information representation and computation at the level of whole circuits in the brain. However, the combinatorial size of the space of possible population activity patterns, together with neural noise, makes it impossible to map the relations between stimuli and population responses directly. Understanding large neural population codes therefore depends on identifying simplifying design principles. We review recent results showing that strongly correlated population codes can be explained using minimal models that rely on low-order relations among cells. We discuss the implications for large populations, and how such models allow mapping the semantic organization of the neural codebook and of stimulus space, as well as decoding.
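One standard family of "minimal models that rely on low-order relations among cells" in this literature is the pairwise maximum-entropy (Ising-like) model, which matches only the measured firing rates and pairwise correlations. The sketch below is a generic, brute-force fit of such a model for a small population by gradient ascent on the log-likelihood; it is not the authors' code, and the function name `fit_pairwise_maxent`, the toy data, and the learning settings are assumptions for illustration.

```python
import itertools
import numpy as np

def fit_pairwise_maxent(spikes, n_iter=3000, lr=0.1):
    """Fit P(x) proportional to exp(sum_i h_i x_i + sum_{i<j} J_ij x_i x_j)
    to binary population patterns. spikes: (n_timebins, n_neurons) 0/1 array,
    with n_neurons small enough to enumerate all 2**n_neurons states."""
    n = spikes.shape[1]
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    mean_emp = spikes.mean(axis=0)                    # observed firing rates
    corr_emp = spikes.T @ spikes / spikes.shape[0]    # observed pairwise moments
    h, J = np.zeros(n), np.zeros((n, n))
    for _ in range(n_iter):
        energy = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
        p = np.exp(energy - energy.max())
        p /= p.sum()                                  # model distribution over states
        mean_mod = p @ states                         # model firing rates
        corr_mod = states.T @ (states * p[:, None])   # model pairwise moments
        h += lr * (mean_emp - mean_mod)               # moment-matching updates
        dJ = corr_emp - corr_mod
        np.fill_diagonal(dJ, 0.0)                     # diagonal is handled by h
        J += lr * dJ
    return h, J

# Toy usage on synthetic, correlated activity of 6 hypothetical neurons.
rng = np.random.default_rng(1)
shared = rng.random((4000, 1))                        # common input -> correlations
toy = (rng.random((4000, 6)) < 0.1 + 0.4 * shared).astype(float)
h_fit, J_fit = fit_pairwise_maxent(toy)
print("fitted fields:", np.round(h_fit, 2))
```

At the fixed point of these updates the model's first- and second-order moments equal the empirical ones, which is exactly the pairwise maximum-entropy condition; scaling this idea to hundreds of neurons requires the approximate or hierarchical fitting strategies that the review discusses.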