Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
7302610 | Neuroscience & Biobehavioral Reviews | 2017 | 26 | 
Abstract
Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25-32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent modulation spectrum is observed for music, including classical music played by single instruments of different types, symphonic, jazz, and rock. The temporal modulations of speech and music show broad but well-separated peaks around 5 and 2 Hz, respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility which should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and neural processing.
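The modulation spectrum described in the abstract can be illustrated with a minimal sketch: extract the slow intensity envelope of a recording and take the power spectrum of that envelope in the 0.25-32 Hz range. The sketch below assumes NumPy/SciPy and a mono waveform `x` sampled at `fs` Hz; it uses a simple broadband Hilbert envelope rather than the cochlear-filterbank analysis used in the paper, so it is an illustration of the general technique, not the authors' exact method.

```python
# Hypothetical sketch of an envelope modulation spectrum (not the paper's pipeline).
import numpy as np
from scipy.signal import hilbert, resample_poly


def modulation_spectrum(x, fs, env_fs=100, fmin=0.25, fmax=32.0):
    """Return (freqs, power) of the slow intensity modulations of x."""
    # 1. Broadband amplitude envelope via the Hilbert transform.
    envelope = np.abs(hilbert(x))
    # 2. Downsample the envelope; modulations of interest are below ~32 Hz.
    envelope = resample_poly(envelope, int(env_fs), int(fs))
    envelope = envelope - envelope.mean()  # remove the DC component
    # 3. Power spectrum of the envelope = modulation spectrum.
    power = np.abs(np.fft.rfft(envelope)) ** 2
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / env_fs)
    keep = (freqs >= fmin) & (freqs <= fmax)
    return freqs[keep], power[keep]
```

On this kind of analysis, speech recordings would be expected to show a broad peak near 5 Hz and music near 2 Hz, as reported in the abstract.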
Related Topics
Life Sciences
Neuroscience
Behavioral Neuroscience
Authors
Nai Ding, Aniruddh D. Patel, Lin Chen, Henry Butler, Cheng Luo, David Poeppel