Article ID | Journal ID | Year | English article | Full text
---|---|---|---|---
925300 | 1474050 | 2014 | 14-page PDF | Free download
• We recorded simultaneous EEG–MEG responses to speech and acoustically matched nonspeech sound features.
• Distinct neural substrates for speech and nonspeech processing were found.
• The existence of preattentive cortical memory traces for speech features is supported.
• Experience-dependent modulation of cortical processing differs between hemispheres.
We addressed the neural organization of speech versus nonspeech sound processing by investigating preattentive cortical auditory processing of changes in five features of a consonant–vowel syllable (consonant, vowel, sound duration, frequency, and intensity) and in their acoustically matched nonspeech counterparts, using simultaneous EEG–MEG recording of the mismatch negativity (MMN/MMNm). Overall, speech–sound processing was enhanced relative to nonspeech sound processing. This effect was strongest in the left hemisphere for changes that affect word meaning (consonant, vowel, and vowel duration), and it was also present in the right hemisphere for the vowel identity change. Furthermore, in the right hemisphere, speech–sound frequency and intensity changes were processed faster than their nonspeech counterparts, and there was a trend toward speech enhancement in frequency processing. In summary, the results support the proposed existence of long-term memory traces for speech sounds in the auditory cortices and indicate at least partly distinct neural substrates for speech and nonspeech sound processing.
Journal: Brain and Language - Volume 130, March 2014, Pages 19–32