Article ID | Journal | Published Year | Pages | File Type
---|---|---|---|---
532221 | Pattern Recognition | 2013 | 13 |
In part I of this two-part study, we introduced a new optimal Bayesian classification methodology that utilizes the same modeling framework proposed in Bayesian minimum-mean-square error (MMSE) error estimation. Optimal Bayesian classification thus completes a Bayesian theory of classification, where both the classifier error and our estimate of the error may be simultaneously optimized and studied probabilistically within the assumed model. Having developed optimal Bayesian classifiers in discrete and Gaussian models in part I, here we explore properties of optimal Bayesian classifiers, in particular, invariance to invertible transformations, convergence to the Bayes classifier, and a connection to Bayesian robust classifiers. We also explicitly derive optimal Bayesian classifiers with non-informative priors, and explore relationships to linear and quadratic discriminant analysis (LDA and QDA), which may be viewed as plug-in rules under Gaussian modeling assumptions. Finally, we present several simulations addressing the robustness of optimal Bayesian classifiers to false modeling assumptions. Companion website: http://gsp.tamu.edu/Publications/supplementary/dalton12a.
► Recent work uses a Bayesian modeling framework to optimize and analyze classifier error estimates.
► Here we use the same Bayesian framework to also optimize classifier design.
► This work thus completes a Bayesian theory of classification based on optimizing performance.
► Here, in Part II, we explore invariance to invertible maps, consistency, and special cases.
► We also compare to Bayesian robust classifiers and test robustness to false modeling assumptions.
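The abstract's remark that LDA and QDA may be viewed as plug-in rules under Gaussian modeling assumptions can be illustrated with a minimal sketch: estimate each class's Gaussian parameters from the sample, then substitute ("plug") them into the Bayes discriminant as if they were the true parameters. This is an illustrative toy on synthetic data, not the paper's optimal Bayesian classifier, and all names and data below are hypothetical.

```python
import numpy as np

def qda_plug_in(X0, X1):
    """Plug-in QDA sketch: per-class Gaussian parameter estimates are
    substituted into the Bayes log-discriminant (hypothetical helper)."""
    n_total = len(X0) + len(X1)
    params = []
    for X in (X0, X1):
        mu = X.mean(axis=0)                       # sample mean
        cov = np.cov(X, rowvar=False)             # sample covariance
        prior = len(X) / n_total                  # empirical class prior
        params.append((mu, np.linalg.inv(cov),
                       np.linalg.slogdet(cov)[1], prior))

    def classify(x):
        scores = []
        for mu, inv_cov, logdet, prior in params:
            d = x - mu
            # Gaussian log-discriminant with estimated parameters plugged in
            scores.append(-0.5 * d @ inv_cov @ d - 0.5 * logdet + np.log(prior))
        return int(np.argmax(scores))

    return classify

# Toy example: two well-separated synthetic Gaussian classes
rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))
X1 = rng.normal([3.0, 3.0], 1.0, size=(100, 2))
clf = qda_plug_in(X0, X1)
```

Forcing the two estimated covariances to a common pooled estimate would reduce this quadratic discriminant to the linear (LDA) case; the optimal Bayesian classifier instead averages the discriminant over the posterior on the model parameters rather than plugging in point estimates.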