Article ID: 9653127
Journal: Neural Networks
Published Year: 2005
Pages: 9
File Type: PDF
Abstract
This paper discusses the task of learning a classifier from observed data in which some input values are missing completely at random. A non-parametric perspective is adopted by defining a modified risk that takes into account the uncertainty of the predicted outputs when missing values are involved. It is shown that this approach generalizes mean imputation in the linear case and that the resulting kernel machine reduces to the standard Support Vector Machine (SVM) when no input values are missing. The method is further extended to the multivariate setting of fitting additive models with componentwise kernel machines, and an efficient implementation is derived from the Least Squares Support Vector Machine (LS-SVM) classifier formulation.
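To make the setting concrete, the sketch below shows a standard LS-SVM classifier (the formulation the paper's implementation builds on), combined with plain mean imputation, which is the baseline that the proposed modified risk generalizes in the linear case. This is a minimal illustration, not the authors' method: the helper names (rbf_kernel, mean_impute, lssvm_train), the RBF kernel choice, and the hyperparameters gamma and sigma are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Gaussian RBF kernel matrix between the rows of X1 and X2 (illustrative choice).
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mean_impute(X):
    # Replace NaNs in each column by that column's observed mean:
    # the simple baseline that the paper's modified risk generalizes
    # in the linear case.
    X = X.copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    # Standard LS-SVM classifier: solve the KKT linear system
    #   [ 0        y^T          ] [b    ]   [0]
    #   [ y   Omega + I/gamma   ] [alpha] = [1]
    # with Omega_ij = y_i * y_j * K(x_i, x_j).
    n = X.shape[0]
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_test, sigma=1.0):
    # Decision rule: sign( sum_i alpha_i y_i K(x, x_i) + b ).
    K = rbf_kernel(X_test, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)

# Usage: synthetic data with roughly 10% of entries missing completely at random.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
X[rng.random(X.shape) < 0.1] = np.nan
alpha, b = lssvm_train(mean_impute(X), y)
pred = lssvm_predict(mean_impute(X), y, alpha, b, mean_impute(X))
```

Because the LS-SVM dual reduces to a single linear system rather than a quadratic program, it lends itself to the kind of efficient componentwise (additive-model) implementation mentioned in the abstract.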
Related Topics
Physical Sciences and Engineering; Computer Science; Artificial Intelligence