Article ID: 430201
Journal: Journal of Computer and System Sciences
Published Year: 2016
Pages: 9 Pages
File Type: PDF
Abstract

• The paper deals with learning large-margin multi-category classifiers.
• Instead of the usual functional definition of sample margin, we use the notion of sample width [4].
• Unlike in [4], the classifiers map not from the real line but from an arbitrary metric space.
• We obtain PAC-like generalization-error bounds that involve the sample width, presented as two theorems.
• The results are applicable to machine learning and have been used in [7] for learning case-based inference.

In a recent paper [4], the authors introduced the notion of sample width for binary classifiers defined on the set of real numbers, and showed that the performance of such classifiers can be quantified in terms of this width. The present paper adapts the idea of sample width to classifiers that are multi-category and defined on an arbitrary metric space.
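To make the real-line notion concrete, the following is a minimal sketch, assuming (as in the real-line setting of [4]) that the width of a classifier at a sample point is the distance from that point to the nearest point classified differently, and the sample width is the minimum of these over the sample. For a simple threshold rule h(x) = +1 if x ≥ t, else −1, that distance is just |x − t|; the function name and the example sample below are illustrative, not taken from the paper.

```python
# Illustrative sketch of "sample width" on the real line for a
# threshold classifier h(x) = +1 if x >= t else -1 (an assumption
# standing in for the formal definition in [4]).

def sample_width(sample, threshold):
    """sample: list of (x, label) pairs with labels in {-1, +1}.

    Requires the threshold rule to classify every sample point
    correctly; returns the minimum distance from any sample point
    to the decision boundary, i.e. min |x - threshold|.
    """
    for x, label in sample:
        predicted = 1 if x >= threshold else -1
        assert predicted == label, "classifier must be consistent with sample"
    return min(abs(x - threshold) for x, _ in sample)

sample = [(-2.0, -1), (-0.5, -1), (1.0, 1), (3.0, 1)]
print(sample_width(sample, 0.0))  # nearest sample point is at distance 0.5
```

The paper's contribution replaces |x − t| with a distance in an arbitrary metric space and allows more than two labels; the scalar case above is only the starting point the abstract refers to.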

Related Topics
Physical Sciences and Engineering > Computer Science > Computational Theory and Mathematics