Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
536053 | Pattern Recognition Letters | 2010 | 7 Pages |
Abstract
We present a method for modeling the temporal profiles of sound descriptors using segmental models. Unlike standard HMMs, this approach allows the fine structure of temporal profiles to be modeled with a reduced number of states. These states, which we call primitives, can be chosen by the user using prior knowledge and assembled to model symbolic musical elements. In this paper, we describe this general methodology and evaluate it on a dataset of violin recordings containing crescendo/decrescendo, glissando and sforzando. The results show that, in this context, the segmental model can segment and recognize these different musical elements with a satisfactory level of accuracy.
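To illustrate the idea of decoding a descriptor profile with segment-level states, the following minimal sketch (not the authors' implementation) runs a segmental Viterbi pass in which each hidden state is a primitive curve shape, and a whole variable-length segment of the observed profile is scored against the primitive resampled to that length. The primitive shapes, the duration range, and the squared-error score are illustrative assumptions.

```python
import numpy as np

# Hypothetical primitives: canonical shapes on [0, 1] for a 1-D descriptor
# (e.g., loudness), each stretched to the candidate segment length.
PRIMITIVES = {
    "rise":    lambda t: t,                    # crescendo-like ramp up
    "fall":    lambda t: 1.0 - t,              # decrescendo-like ramp down
    "plateau": lambda t: np.full_like(t, 0.5), # steady level
}

def segment_score(obs_segment, primitive):
    """Score a whole segment: negative squared error between the observed
    profile and the primitive resampled to the segment's length."""
    t = np.linspace(0.0, 1.0, len(obs_segment))
    return -np.sum((obs_segment - primitive(t)) ** 2)

def segmental_viterbi(obs, primitives, min_len=5, max_len=60):
    """Dynamic programming over segment end positions and durations.
    Returns a list of (start, end, primitive_name) covering obs."""
    T = len(obs)
    best = np.full(T + 1, -np.inf)   # best total score ending exactly at t
    best[0] = 0.0
    back = [None] * (T + 1)          # back-pointer: (segment start, primitive)
    for end in range(1, T + 1):
        for dur in range(min_len, min(max_len, end) + 1):
            start = end - dur
            if best[start] == -np.inf:
                continue
            for name, shape in primitives.items():
                s = best[start] + segment_score(obs[start:end], shape)
                if s > best[end]:
                    best[end] = s
                    back[end] = (start, name)
    # Trace back the best segmentation from the last frame.
    segments, t = [], T
    while t > 0 and back[t] is not None:
        start, name = back[t]
        segments.append((start, t, name))
        t = start
    return segments[::-1]

if __name__ == "__main__":
    # Synthetic descriptor profile: a ramp up followed by a ramp down,
    # expected to decode as one "rise" segment then one "fall" segment.
    obs = np.concatenate([np.linspace(0, 1, 40), np.linspace(1, 0, 40)])
    print(segmental_viterbi(obs, PRIMITIVES))
```

Because each state scores an entire segment rather than a single frame, a small set of such primitives can capture shapes that a standard frame-level HMM would need many states to approximate.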
Related Topics
Physical Sciences and Engineering
Computer Science
Computer Vision and Pattern Recognition
Authors
Julien Bloit, Nicolas Rasamimanana, Frédéric Bevilacqua