Article ID | Journal | Published Year | Pages | File Type |
---|---|---|---|---|
6268411 | Journal of Neuroscience Methods | 2015 | 11 Pages | |
- We inspect the variability in results (data-analytical stability) of topological inference and add it to the evaluation protocol.
- This permits quantitative evidence for reproducibility of results.
- Methodological choices of inference type and smoothing impact data-analytical stability.
- Data-analytical stability is easy to implement on real data and allows for better-informed methodological choices.
**Background**
Carp (2012) demonstrated the large variability present in the methods sections of fMRI studies. This methodological variability between studies limits reproducible research.

**New method**
Evaluation protocols for methods used in fMRI should include data-analytical stability measures that quantify the variability in results arising from choices in the methods. Data-analytical stability can be seen as a proxy for reproducibility. To illustrate how such evaluations can be performed, we study two competing approaches to topological feature-based inference (random field theory and permutation-based testing) and two competing smoothing methods (Gaussian smoothing and adaptive smoothing). We compare these approaches from the perspective of data-analytical stability in real data, and additionally consider validity and reliability in simulations.

**Results**
There is clear evidence that methodological choices affect the validity, reliability and stability of the results. For the particular comparison studied, we find that permutation-based methods yield the most valid results. For stability and reliability, the performance of the different smoothing and inference types depends on the setting. However, while being more reliable, adaptive smoothing can produce less stable results when larger kernel widths are used, especially with cluster-size-based permutation inference.

**Comparison with existing methods**
While existing evaluation methods focus on validity and reliability, we show that data-analytical stability makes it possible to further distinguish between the performance of different methods.

**Conclusion**
Data-analytical stability is an important additional criterion that can easily be incorporated in evaluation protocols.
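The abstract does not include code, so as a rough illustration of the kind of pipeline being compared, the sketch below applies Gaussian smoothing to per-subject contrast maps and then runs a cluster-size permutation test on the group t-map. All names and parameters here (the FWHM, the cluster-forming threshold, the number of permutations, and the one-sample sign-flipping design) are assumptions chosen for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage, stats

def cluster_sizes(t_map, threshold):
    """Label supra-threshold voxels and return the size of each cluster."""
    mask = t_map > threshold
    labels, n = ndimage.label(mask)
    return np.asarray(ndimage.sum(mask, labels, index=range(1, n + 1)))

def cluster_size_permutation(contrast_maps, fwhm_vox=2.0, cluster_thresh=2.3,
                             n_perm=1000, alpha=0.05, rng=None):
    """Hypothetical cluster-size permutation test for a one-sample group design.

    contrast_maps : array (n_subjects, x, y, z) of per-subject contrast images.
    fwhm_vox      : Gaussian smoothing FWHM in voxels (sigma = FWHM / 2.355).
    """
    rng = np.random.default_rng(rng)
    sigma = fwhm_vox / 2.355
    # Spatially smooth each subject's map before group-level inference.
    smoothed = np.array([ndimage.gaussian_filter(m, sigma) for m in contrast_maps])

    # Observed group t-map and its supra-threshold cluster sizes.
    t_obs, _ = stats.ttest_1samp(smoothed, popmean=0.0, axis=0)
    obs_sizes = cluster_sizes(t_obs, cluster_thresh)

    # Null distribution of the maximum cluster size via random sign flips,
    # which is a valid exchangeability scheme for a one-sample design.
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=len(smoothed))
        t_perm, _ = stats.ttest_1samp(signs[:, None, None, None] * smoothed,
                                      popmean=0.0, axis=0)
        sizes = cluster_sizes(t_perm, cluster_thresh)
        max_null[i] = sizes.max() if sizes.size else 0.0

    critical = np.quantile(max_null, 1.0 - alpha)
    # Clusters larger than `critical` survive the permutation threshold.
    return obs_sizes, critical
```

Random field theory inference would instead derive the cluster-size threshold analytically from the estimated smoothness of the residuals; the permutation variant above is shown only because it is the easier of the two to sketch self-containedly.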
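Likewise, the abstract does not state how data-analytical stability is quantified. One hypothetical way to operationalise "variability in results" is the overlap of significant voxels obtained from independent split-halves of the subjects; the split-half scheme and the Dice measure below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient between two binary activation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / total if total else 1.0

def split_half_stability(contrast_maps, analysis, n_splits=100, rng=None):
    """Stability of thresholded results across random split-halves of the subjects.

    analysis : callable mapping an array (n_subjects, x, y, z) to a binary
               map of significant voxels (e.g. built on the sketch above).
    """
    rng = np.random.default_rng(rng)
    n = len(contrast_maps)
    scores = []
    for _ in range(n_splits):
        order = rng.permutation(n)
        half_a = contrast_maps[order[: n // 2]]
        half_b = contrast_maps[order[n // 2:]]
        scores.append(dice_overlap(analysis(half_a), analysis(half_b)))
    return np.mean(scores), np.std(scores)
```

Under such a scheme, higher mean overlap (and lower spread) across splits would indicate a more stable combination of smoothing and inference choices.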