Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
536385 | 870505 | 2014 | 10-page PDF | Free download |
• Variant of hidden Markov models using contextual variables for handling variability.
• Extension to discriminative contextual Hidden Conditional Random Fields.
• Both models improve over their non-contextual counterparts, HMMs and HCRFs.
• Experiments on isolated handwritten character recognition from the IAM dataset.
There are two popular families of statistical models for dealing with sequences, and in particular with handwriting signals, whether on-line or off-line: the well-known generative hidden Markov models (HMMs) and the more recently proposed discriminative Hidden Conditional Random Fields (HCRFs). One key issue in such modeling frameworks is to handle variability efficiently. The traditional approach consists in first removing as much signal variability as possible in the preprocessing stage, and then using more complex models; for instance, in the case of hidden Markov models, one increases the number of states and the size of the Gaussian mixtures. We focus here on another kind of approach, where the probability distribution implemented by the models depends on a number of additional contextual variables that are assumed fixed, or to vary slowly, along a sequence. The context may stand for emotion features in speech recognition, physical features in gesture recognition, gender, age, etc. We propose a framework for deriving Markovian models that make use of such contextual information. This yields new models that we call contextual hidden Markov models and contextual Hidden Conditional Random Fields. We detail learning algorithms for both models and investigate their performance on the IAM off-line handwriting dataset.
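To make the idea of context-dependent distributions concrete, here is a minimal sketch of one possible (assumed, not the paper's exact) parameterization: a Gaussian-emission HMM whose per-state means are shifted linearly by a fixed context vector, `mu_s(theta) = mu[s] + W[s] @ theta`, scored with the standard forward algorithm. All parameter names and the toy setup below are illustrative.

```python
import numpy as np

def contextual_hmm_loglik(obs, context, pi, A, mu, W, var):
    """Log-likelihood of a scalar observation sequence under a toy
    'contextual' HMM: each state's Gaussian emission mean is shifted
    linearly by a context vector assumed constant over the sequence.

    obs:     (T,)   observations
    context: (C,)   context vector (e.g. writer features)
    pi:      (S,)   initial state probabilities
    A:       (S, S) transition matrix
    mu:      (S,)   base emission means
    W:       (S, C) context-to-mean weights (the 'contextual' part)
    var:     (S,)   emission variances
    """
    means = mu + W @ context  # (S,) context-adapted emission means
    # Gaussian log-densities for every (time, state) pair -> (T, S)
    logB = (-0.5 * np.log(2 * np.pi * var)
            - 0.5 * (obs[:, None] - means) ** 2 / var)
    # Forward algorithm in log space
    alpha = np.log(pi) + logB[0]
    for t in range(1, len(obs)):
        m = alpha.max()  # log-sum-exp over previous states
        alpha = m + np.log(np.exp(alpha - m) @ A) + logB[t]
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```

A matching context should then yield a higher sequence likelihood than a mismatched one, which is the effect the contextual models are designed to exploit without inflating the state space or mixture sizes.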
Journal: Pattern Recognition Letters - Volume 35, 1 January 2014, Pages 236–245