| Article ID | Journal | Published Year | Pages | File Type |
| --- | --- | --- | --- | --- |
| 6260436 | Current Opinion in Behavioral Sciences | 2016 | 6 Pages | |
- A non-associative model of associative learning.
- Focus on the encoding of intervals into memory.
- Partly inspired by results showing interval memory in Purkinje cells.
- Founded on minimum description length principle for stochastic model selection.
- Explains cue competition, response timing and the parametric invariances.
Two principles of information theory - maximum entropy and minimum description length - motivate a computational model of associative learning that explains the assignment of credit, response timing, and the parametric invariances. The maximum entropy principle gives rise to two distributions - the exponential and the Gaussian - which naturally lend themselves to inference involving the two fundamental classes of predictors: enduring states and point events. These distributions are the 'atoms' from which more complex representations are built. The representation that is 'learned' is determined by the principle of minimum description length. In this theory, learning is synonymous with data compression: the representation of its experience that the animal learns is the one that best allows the data of experience to be compressed.
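To make the selection criterion concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of two-part minimum description length model selection over the two maximum-entropy 'atoms'. The simulated intervals, function names, and the standard (k/2)·log2(n) parameter-cost approximation are illustrative assumptions, not the article's coding scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: inter-reinforcement intervals (seconds). An enduring state
# with a constant hazard predicts exponentially distributed waits; a remembered
# point-event interval predicts values clustered around a mean (Gaussian).
intervals = rng.exponential(scale=30.0, size=200)
n = intervals.size

def description_length_bits(neg_log_lik_nats, n_params, n_obs):
    """Two-part code length in bits: data cost under the fitted model plus a
    (k/2) * log2(n) penalty for encoding the fitted parameters."""
    return neg_log_lik_nats / np.log(2) + 0.5 * n_params * np.log2(n_obs)

# Exponential 'atom': maximum-entropy distribution for a positive variable
# with a fixed mean; the maximum-likelihood rate is 1 / sample mean.
lam = 1.0 / intervals.mean()
nll_exp = -np.sum(np.log(lam) - lam * intervals)            # nats
dl_exp = description_length_bits(nll_exp, n_params=1, n_obs=n)

# Gaussian 'atom': maximum-entropy distribution for a fixed mean and variance.
mu, sigma = intervals.mean(), intervals.std(ddof=0)
nll_gauss = -np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                    - (intervals - mu) ** 2 / (2 * sigma**2))
dl_gauss = description_length_bits(nll_gauss, n_params=2, n_obs=n)

# The 'learned' representation is the one that compresses the data best.
best = "exponential" if dl_exp < dl_gauss else "Gaussian"
print(f"exponential: {dl_exp:.1f} bits, Gaussian: {dl_gauss:.1f} bits -> {best}")
```

On these simulated exponential intervals the exponential code typically yields the shorter description length, illustrating the abstract's claim that the representation the animal learns is the one that best compresses the data of its experience.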