Article code | Journal code | Year | Paper | Full text |
---|---|---|---|---|
536377 | 870505 | 2014 | 12-page English PDF | free download |
• We propose an LLHMM handwritten word classifier for binary data.
• We prove the equivalence between a Bernoulli HMM classifier and the LLHMM.
• We propose an MMI training scheme for BHMMs.
• We test the proposed MMI training scheme on the RIMES database.
• MMI BHMMs (15% error) outperform conventional BHMMs (21% error) using half the parameters.
Bernoulli HMMs (BHMMs) have been successfully applied to handwritten text recognition (HTR) tasks such as continuous and isolated handwritten words. BHMMs belong to the generative model family and, hence, are usually trained by (joint) maximum likelihood estimation (MLE) by means of the Baum–Welch algorithm. Despite the good properties of the MLE criterion, there are better training criteria, such as maximum mutual information (MMI). MMI is the most widespread criterion for training discriminative models such as log-linear (or maximum entropy) models. Inspired by a BHMM classifier, in this work a log-linear HMM (LLHMM) for binary data is proposed. The proposed model is proved to be equivalent to the BHMM classifier, and in this way a discriminative training framework for BHMM classifiers is defined. The behavior of the proposed discriminative training framework is studied in depth on a well-known isolated word recognition task, the RIMES database.
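The claimed equivalence rests on a simple identity: the log-probability of a binary vector under independent Bernoulli emissions is linear in the augmented feature vector (o, 1), i.e. it has log-linear form. A minimal NumPy sketch of this identity and of a BHMM forward-pass likelihood (toy parameters and dimensions are my assumptions, not the paper's setup):

```python
import numpy as np

# Toy Bernoulli HMM: S states emitting D-dimensional binary vectors
# whose dimensions are independent Bernoulli variables.
rng = np.random.default_rng(0)
D, S = 8, 3

p = rng.uniform(0.1, 0.9, (S, D))   # Bernoulli emission probabilities per state
A = np.full((S, S), 1.0 / S)        # uniform transitions (toy choice)
pi = np.full(S, 1.0 / S)            # uniform initial distribution

o = rng.integers(0, 2, D)           # one binary observation vector

# Bernoulli emission log-probability for each state:
#   log p(o|s) = sum_d [ o_d log p_sd + (1 - o_d) log(1 - p_sd) ]
log_b = o @ np.log(p).T + (1 - o) @ np.log(1 - p).T

# The same quantity written as a log-linear (maximum-entropy) score:
# weights are per-dimension log-odds, plus a per-state bias term.
w = np.log(p / (1 - p))             # (S, D) log-odds weights
bias = np.log(1 - p).sum(axis=1)    # (S,) bias
log_b_loglinear = o @ w.T + bias
assert np.allclose(log_b, log_b_loglinear)

# Forward algorithm over a short sequence of binary frames gives the
# sequence likelihood that Baum-Welch (MLE) training would maximize.
T = 5
O = rng.integers(0, 2, (T, D))
logB = O @ np.log(p).T + (1 - O) @ np.log(1 - p).T   # (T, S) emission scores
alpha = pi * np.exp(logB[0])
for t in range(1, T):
    alpha = (alpha @ A) * np.exp(logB[t])
loglik = np.log(alpha.sum())
```

Because the emission score is exactly a dot product with (o, 1), a BHMM classifier can be reparameterized as a log-linear model, which is what opens the door to discriminative (MMI) training of the same parameters.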
Journal: Pattern Recognition Letters - Volume 35, 1 January 2014, Pages 157–168