Article code: 530292
Journal code: 869756
Publication year: 2012
Full text: 16-page PDF (free download)
English title (ISI article)
Model sparsity and brain pattern interpretation of classification models in neuroimaging
Related subjects
Engineering and Basic Sciences; Computer Engineering; Computer Vision and Pattern Recognition
English abstract

Interest is increasing in applying discriminative multivariate analysis techniques to the analysis of functional neuroimaging data. Model interpretation is of great importance in the neuroimaging context, and is conventionally based on a ‘brain map’ derived from the classification model. In this study we focus on the relative influence of model regularization parameter choices on model generalization, on the reliability of the spatial patterns extracted from the classification model, and on the ability of the resulting model to identify relevant brain networks defining the underlying neural encoding of the experiment. For a support vector machine, logistic regression and Fisher's discriminant analysis we demonstrate that the selection of model regularization parameters has a strong but consistent impact on the generalizability, reproducibility and interpretable sparsity of the models, for both ℓ2 and ℓ1 regularization. Importantly, we illustrate a trade-off between model spatial reproducibility and prediction accuracy. We show that known parts of brain networks can be overlooked when classification accuracy alone is maximized, with either ℓ2 or ℓ1 regularization. This supports the view that the quality of spatial patterns extracted from models cannot be assessed purely by focusing on prediction accuracy. Our results instead suggest that model regularization parameters must be carefully selected, so that the model and its visualization enhance our ability to interpret the brain.
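As an illustration of the kind of analysis the abstract describes, the Python sketch below sweeps the regularization parameter of an ℓ1- or ℓ2-penalized logistic regression and, within a resampling framework, records both prediction accuracy and a simple reproducibility proxy for the extracted weight map. It is a hypothetical sketch, not the authors' code: scikit-learn's make_classification stands in for real fMRI data, and the mean pairwise correlation of weight maps across splits is only an illustrative stand-in for the paper's reproducibility measure.

# Minimal sketch (assumptions noted above): sweep regularization strength
# and, across resampling splits, record test accuracy and a reproducibility
# proxy for the extracted weight map ("brain map").
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit

# Synthetic stand-in for an fMRI data set: rows = scans, columns = voxels.
X, y = make_classification(n_samples=200, n_features=500,
                           n_informative=20, random_state=0)
splitter = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=0)

for penalty in ("l1", "l2"):
    for C in (0.1, 1.0, 10.0):  # C is the inverse regularization strength
        accuracies, weight_maps = [], []
        for train, test in splitter.split(X, y):
            clf = LogisticRegression(penalty=penalty, C=C, solver="liblinear")
            clf.fit(X[train], y[train])
            accuracies.append(clf.score(X[test], y[test]))
            weight_maps.append(clf.coef_.ravel())
        # Reproducibility proxy: mean pairwise correlation of the weight maps
        # obtained on different resampling splits (nan if a map is entirely
        # zeroed out by a very strong l1 penalty).
        corrs = [np.corrcoef(a, b)[0, 1] for a, b in combinations(weight_maps, 2)]
        print(f"penalty={penalty}  C={C:5.1f}  "
              f"accuracy={np.mean(accuracies):.3f}  "
              f"map reproducibility={np.nanmean(corrs):.3f}")

Plotting accuracy against the reproducibility proxy over such a sweep makes the trade-off mentioned in the abstract visible; the specific reproducibility measure used in the paper may differ from this stand-in.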


► We consider classification models widely used within the neuroimaging community.
► Within a resampling framework we evaluate the importance of appropriate selection of model regularization parameters.
► We illustrate a trade-off between model visualization reproducibility and prediction accuracy.
► The quality of spatial patterns extracted from models cannot be assessed purely by focusing on prediction accuracy.
► Optimizing prediction accuracy does not ensure discovery of the relevant brain networks.

Publisher
Database: Elsevier - ScienceDirect
Journal: Pattern Recognition - Volume 45, Issue 6, June 2012, Pages 2085–2100