Article ID Journal Published Year Pages File Type
9653511 Neurocomputing 2005 6 Pages PDF
Abstract
Mutual information enjoys wide use in the computational neuroscience community for analyzing spiking neural systems. Its direct calculation is difficult because estimating the joint stimulus-response distribution requires a prohibitive amount of data. Consequently, several techniques that require less data have appeared for bounding mutual information. We examine two upper bound techniques and find that they are unreliable and can introduce strong assumptions about the neural code. We also examine two lower bounds, showing that they can be very loose and possibly bear little relation to the mutual information's actual value.
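To illustrate the direct ("plug-in") calculation the abstract refers to, and why it is data-hungry, here is a minimal sketch (not the authors' method; the function name and interface are illustrative). It estimates I(S;R) from paired stimulus-response samples by building the empirical joint distribution; with R ranging over spike-train patterns, this table grows exponentially with the response dimension, which is the source of the prohibitive data requirement.

```python
import numpy as np

def plugin_mutual_information(stimuli, responses):
    """Plug-in estimate of I(S;R) in bits from paired discrete samples.

    Builds the empirical joint distribution p(s, r) and applies
    I = sum_{s,r} p(s,r) * log2( p(s,r) / (p(s) * p(r)) ).
    """
    stimuli = np.asarray(stimuli)
    responses = np.asarray(responses)
    # Map each observed symbol to a row/column index of the joint table.
    s_vals, s_idx = np.unique(stimuli, return_inverse=True)
    r_vals, r_idx = np.unique(responses, return_inverse=True)
    # Empirical joint distribution: one cell per (stimulus, response) pair.
    joint = np.zeros((len(s_vals), len(r_vals)))
    np.add.at(joint, (s_idx, r_idx), 1.0)
    joint /= joint.sum()
    # Marginals p(s) and p(r).
    ps = joint.sum(axis=1, keepdims=True)
    pr = joint.sum(axis=0, keepdims=True)
    # Sum only over cells with nonzero probability (0 * log 0 = 0).
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (ps * pr)[mask])))
```

The estimate is only trustworthy when every likely (s, r) cell is sampled many times, which is exactly the regime the bounding techniques discussed in the paper try to avoid.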
Related Topics
Physical Sciences and Engineering Computer Science Artificial Intelligence