Article ID | Journal | Published Year | Pages
---|---|---|---
5631186 | NeuroImage | 2017 | 11 Pages
- Convolutional network layer image representations explain ventral stream fMRI.
- This mapping follows the known hierarchical organisation.
- Results hold for both static images and video stimuli.
- A full-brain predictive model synthesizes brain maps for other visual experiments.
- Only deep models can reproduce the observed BOLD activity.
Convolutional networks used for computer vision are candidate models for the computations performed in mammalian visual systems. We use them as a detailed model of human brain activity during the viewing of natural images, constructing predictive models that link their different layers to BOLD fMRI activations. Analyzing predictive performance across layers yields a characteristic fingerprint for each visual brain region: early visual areas are better described by lower-level convolutional net layers and later visual areas by higher-level net layers, exhibiting a progression across the ventral and dorsal streams. Our predictive model generalizes beyond brain responses to natural images. We illustrate this on two experiments, retinotopy and face-place oppositions, by synthesizing brain activity and performing classical brain mapping on it. The synthesis recovers the activations observed in the corresponding fMRI studies, showing that this deep encoding model captures representations of brain function that are universal across experimental paradigms.
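The layer-wise analysis described above can be sketched as a per-layer encoding model: for each network layer, fit a regularized linear regression from that layer's image features to voxel responses, and take held-out predictive performance as the layer's explanatory power; the layer that predicts best gives the region's "fingerprint". The sketch below uses synthetic feature matrices and simulated BOLD data in place of real CNN activations and fMRI recordings, and ridge regression as the (assumed) linear mapping; all names and sizes are illustrative, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features, n_voxels = 200, 100, 50

# Hypothetical per-layer feature matrices (images x features); in practice
# these would be activations extracted from a pretrained convolutional net.
layer_features = {f"layer{i}": rng.normal(size=(n_images, n_features))
                  for i in (1, 2, 3)}

# Simulated BOLD responses driven by layer2 features plus noise, so the
# fingerprint of this synthetic "region" should peak at layer2.
w = rng.normal(size=(n_features, n_voxels))
bold = layer_features["layer2"] @ w + 0.5 * rng.normal(size=(n_images, n_voxels))

fingerprint = {}
for name, X in layer_features.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, bold, test_size=0.25,
                                              random_state=0)
    model = Ridge(alpha=10.0).fit(X_tr, y_tr)
    # Held-out R^2 (averaged over voxels) as the layer's explanatory power.
    fingerprint[name] = model.score(X_te, y_te)

best_layer = max(fingerprint, key=fingerprint.get)
print(best_layer)  # the layer whose features generated the data
```

Repeating this per brain region and plotting each region's scores across layers produces the low-to-high-level progression the abstract reports for the visual hierarchy.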
Graphical abstract