Article ID | Journal ID | Publication year | English article | Full text |
---|---|---|---|---|
5041227 | 1474009 | 2017 | 9-page PDF | Free download |
- Sentences can be classified across three languages using neural activation patterns.
- Models trained on two languages decode a third language more accurately than models trained on one.
- The two-language advantage was selective to more abstract concept domains.
- RSA yielded similar sentence clusterings across the three languages.
- The results reveal both the commonality and the culture-specific aspects of neural concept encodings.
This study extended cross-language semantic decoding (based on a concept's fMRI signature) to the decoding of sentences across three different languages (English, Portuguese, and Mandarin). A classifier was trained on either the mapping between words and activation patterns in one language, or on the mappings in two languages (using an equivalent amount of training data), and was then tested on its ability to decode the semantic content of a third language. For all three pairings of languages, the model trained on two languages was reliably more accurate than a classifier trained on one. This two-language advantage was selective to abstract concept domains such as social interactions and mental activity. Representational Similarity Analysis (RSA) of the inter-sentence neural similarities produced similar clusterings of sentences in all three languages, indicating a shared neural concept space among the languages. These findings identify semantic domains that are common across these three languages versus those that are more language- or culture-specific.
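The training-and-transfer scheme described above can be illustrated with a toy simulation. This is a minimal sketch, not the authors' pipeline: the semantic feature vectors, voxel counts, noise levels, and the assumption that activation patterns sit in a common anatomical space are all hypothetical stand-ins, and a plain least-squares decoder replaces whatever classifier the study used. The simulation encodes the paper's premise directly by giving each "language" a mostly shared semantic-to-voxel mapping plus a language-specific perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sent, n_sem, n_vox = 60, 20, 50

# Hypothetical shared semantic feature vectors, one per sentence
# (random stand-ins for real semantic annotations).
S = rng.standard_normal((n_sent, n_sem))

# Each "language" projects the shared semantic space into voxel space
# through a common mapping plus a language-specific perturbation.
W_shared = rng.standard_normal((n_sem, n_vox))

def simulate_language(spec_scale=0.4, noise=0.3):
    W = W_shared + spec_scale * rng.standard_normal((n_sem, n_vox))
    return S @ W + noise * rng.standard_normal((n_sent, n_vox))

X_en, X_pt, X_zh = simulate_language(), simulate_language(), simulate_language()

# Train a linear decoder (voxels -> semantic features) on two languages
# pooled, mimicking the two-language training condition.
X_train = np.vstack([X_en, X_pt])
S_train = np.vstack([S, S])
B, *_ = np.linalg.lstsq(X_train, S_train, rcond=None)

# Decode the held-out third language; a sentence counts as correct when its
# predicted semantic vector is closest (cosine) to its own true vector.
S_pred = X_zh @ B
a = S_pred / np.linalg.norm(S_pred, axis=1, keepdims=True)
b = S / np.linalg.norm(S, axis=1, keepdims=True)
acc = float(np.mean(np.argmax(a @ b.T, axis=1) == np.arange(n_sent)))
print(f"cross-language decoding accuracy: {acc:.2f} (chance = {1/n_sent:.3f})")
```

Because the simulated languages share most of their semantic-to-voxel mapping, the decoder transfers to the unseen language well above the 1-in-60 chance level, which is the qualitative effect the study reports.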
Journal: Brain and Language - Volume 175, December 2017, Pages 77-85
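The RSA result can likewise be sketched in a few lines. Again this is an illustrative simulation under the same hypothetical generative assumptions (shared semantic structure, random stand-in features), not the study's analysis: it builds a representational dissimilarity matrix (RDM) per language from 1 − Pearson correlation between sentence patterns, then compares RDMs across languages with a hand-rolled Spearman rank correlation on their upper triangles.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sent, n_sem, n_vox = 60, 20, 300

# Shared semantic structure across languages (random stand-in features).
S = rng.standard_normal((n_sent, n_sem))
W_shared = rng.standard_normal((n_sem, n_vox))

def simulate_language(spec_scale=0.4, noise=0.2):
    # common semantic-to-voxel mapping plus a language-specific part
    W = W_shared + spec_scale * rng.standard_normal((n_sem, n_vox))
    return S @ W + noise * rng.standard_normal((n_sent, n_vox))

def rdm(X):
    # representational dissimilarity matrix:
    # 1 - Pearson correlation between sentence activation patterns
    return 1.0 - np.corrcoef(X)

def upper(M):
    # vectorise the upper triangle (RDMs are symmetric with zero diagonal)
    i, j = np.triu_indices_from(M, k=1)
    return M[i, j]

def spearman(u, v):
    # Spearman rank correlation, the conventional statistic for comparing RDMs
    ru = np.argsort(np.argsort(u)).astype(float)
    rv = np.argsort(np.argsort(v)).astype(float)
    return float(np.corrcoef(ru, rv)[0, 1])

X_en, X_pt, X_zh = simulate_language(), simulate_language(), simulate_language()
r_en_pt = spearman(upper(rdm(X_en)), upper(rdm(X_pt)))
r_en_zh = spearman(upper(rdm(X_en)), upper(rdm(X_zh)))
print(f"RDM similarity en-pt: {r_en_pt:.2f}, en-zh: {r_en_zh:.2f}")
```

A shared neural concept space, as the abstract describes, shows up here as clearly positive cross-language RDM correlations; sentences that are neurally similar in one language are similar in the others as well.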