Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
8840616 | 1614692 | 2018 | 12-page PDF | Free download |
English title of the ISI article
Taking Attention Away from the Auditory Modality: Context-dependent Effects on Early Sensory Encoding of Speech
Persian translation of the title
دور کردن توجه از وجه شنوایی: اثرات وابسته به بافت بر رمزگذاری حسی اولیهٔ گفتار
Keywords
SSA (stimulus-specific adaptation), FFR (frequency-following response), ABR (auditory brainstem response), EEG (electroencephalography), MEG (magnetoencephalography), ERP (event-related potential), fundamental frequency, ANOVA (analysis of variance), SVM (support vector machine), inferior colliculus, machine learning
Related subjects
Life Sciences and Biotechnology
Neuroscience
Neuroscience (general)
English abstract
Increasing visual perceptual load can reduce pre-attentive auditory cortical activity to sounds, a reflection of the limited and shared attentional resources for sensory processing across modalities. Here, we demonstrate that modulating visual perceptual load can impact the early sensory encoding of speech sounds, and that the impact of visual load is highly dependent on the predictability of the incoming speech stream. Participants (n = 20, 9 females) performed a visual search task of high (target similar to distractors) and low (target dissimilar to distractors) perceptual load, while early auditory electrophysiological responses were recorded to native speech sounds. Speech sounds were presented either in a 'repetitive context', or a less predictable 'variable context'. Independent of auditory stimulus context, pre-attentive auditory cortical activity was reduced during high visual load, relative to low visual load. We applied a data-driven machine learning approach to decode speech sounds from the early auditory electrophysiological responses. Decoding performance was found to be poorer under conditions of high (relative to low) visual load, when the incoming acoustic stream was predictable. When the auditory stimulus context was less predictable, decoding performance was substantially greater for the high (relative to low) visual load conditions. Our results provide support for shared attentional resources between visual and auditory modalities that substantially influence the early sensory encoding of speech signals in a context-dependent manner.
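The "data-driven machine learning approach" mentioned in the abstract can be illustrated with a generic cross-validated SVM classifier over flattened EEG epochs. This is a minimal sketch, not the authors' pipeline: the data are synthetic, and every shape, name, and parameter here is an assumption for illustration.

```python
# Hedged sketch: SVM decoding of speech-sound identity from single-trial
# EEG-like epochs, per condition. Synthetic data; all dimensions assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_channels, n_samples = 200, 32, 100  # assumed epoch dimensions
labels = rng.integers(0, 2, size=n_trials)      # two speech-sound classes

# Synthetic epochs: a class-dependent mean shift stands in for real
# stimulus-evoked differences in the electrophysiological response.
epochs = rng.normal(size=(n_trials, n_channels, n_samples))
epochs[labels == 1] += 0.3

# Flatten channels x time into one feature vector per trial.
X = epochs.reshape(n_trials, -1)

# Standardize features, then fit a linear SVM; report 5-fold accuracy.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, labels, cv=5)
print(round(scores.mean(), 2))
```

In a study like this one, such a decoder would be trained and evaluated separately within each load/context condition, so that differences in cross-validated accuracy index how well speech identity is preserved in the neural response under each condition.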
Publisher
Database: Elsevier - ScienceDirect
Journal: Neuroscience - Volume 384, 1 August 2018, Pages 64-75
Authors
Zilong Xie, Rachel Reetzke, Bharath Chandrasekaran