| Article ID | Journal | Published Year | Pages | File Type |
|---|---|---|---|---|
| 409456 | Neurocomputing | 2006 | 6 Pages | |
Abstract
Decisions made by support vector machines (SVMs) are hard to interpret from a human perspective. We take advantage of a compact SVM solution previously developed, known as the growing support vector classifier (GSVC), to interpret SVM decisions in terms of a segmentation of the input space into Voronoi regions (determined by the prototypes extracted during GSVC training), together with rules built as linear combinations of the input variables. We show by means of experiments on public-domain datasets that the resulting interpretable machines have high fidelity and an accuracy comparable to that of the SVM.
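The interpretation scheme described in the abstract can be sketched as follows: a test point is assigned to the Voronoi region of its nearest prototype, and the linear rule attached to that region produces the decision. The prototypes, rule weights, and biases below are hypothetical placeholders for illustration; they are not taken from the paper.

```python
import math

# Hypothetical prototypes (one per Voronoi region) and per-region linear
# rules (weights, bias); values are invented for this sketch.
prototypes = [(0.0, 0.0), (2.0, 2.0)]
rules = [((1.0, -1.0), 0.1), ((-0.5, 1.5), -0.2)]

def interpretable_predict(x):
    """Assign x to its nearest prototype's Voronoi region,
    then apply that region's linear rule to classify it."""
    region = min(range(len(prototypes)),
                 key=lambda k: math.dist(prototypes[k], x))
    w, b = rules[region]
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return region, 1 if score >= 0 else -1

print(interpretable_predict((1.8, 2.1)))  # falls in region 1
```

Because each region's rule is a single linear combination of the input variables, a prediction can be explained by naming the region (via its prototype) and reading off the rule's weights.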
Related Topics
Physical Sciences and Engineering
Computer Science
Artificial Intelligence
Authors
A. Navia-Vázquez, E. Parrado-Hernández
