Article ID: 6869793
Journal: Computational Statistics & Data Analysis
Published Year: 2014
Pages: 11 Pages
File Type: PDF
Abstract
Variable selection has been suggested for Random Forests to improve data prediction and interpretation. However, its basic element, the variable importance measure, cannot be computed straightforwardly when there are missing values in the predictor variables. Possible solutions are multiple imputation, complete case analysis, and the use of a self-contained importance measure that is able to deal with missing values. Simulation and application studies were conducted to investigate the properties of these procedures when combined with two popular variable selection methods. Findings and recommendations: complete case analysis should not be used, as it led to inaccurate variable selection. Multiple imputation is the method of choice if the selection of a variable is supposed to reflect its potential relevance in a complete-data setting. However, Random Forests are commonly used without any preprocessing of the data, as they are known to deal with missing values implicitly. In that case, the self-contained importance measure permits the selection of variables that are relevant in these actual prediction models.
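The imputation-based route described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: drawing imputed values from the observed marginal distribution stands in for a proper multiple-imputation model, and scikit-learn's permutation importance stands in for the importance measures studied. The data, variable names, and all parameter choices are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic data: X0 and X1 are relevant predictors, X2 is pure noise.
n = 300
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n)

# Delete ~30% of X0 completely at random.
X_miss = X.copy()
miss = rng.random(n) < 0.3
X_miss[miss, 0] = np.nan

# Crude multiple imputation: create M completed datasets by drawing
# from the observed values of X0, fit a forest on each, and average
# the permutation importances across the M imputations.
M = 5
observed = X_miss[~miss, 0]
imp_runs = []
for m in range(M):
    X_imp = X_miss.copy()
    X_imp[miss, 0] = rng.choice(observed, size=miss.sum())
    rf = RandomForestRegressor(n_estimators=100, random_state=m).fit(X_imp, y)
    result = permutation_importance(rf, X_imp, y, n_repeats=5, random_state=m)
    imp_runs.append(result.importances_mean)

avg_importance = np.mean(imp_runs, axis=0)  # one pooled score per predictor
```

Even with this deliberately simple imputation model, the pooled importances should rank the truly relevant predictors (X0, X1) above the noise variable (X2), which is the property a variable selection method builds on. Complete-case analysis, by contrast, would discard roughly 30% of the rows before fitting.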
Related Topics
Physical Sciences and Engineering › Computer Science › Computational Theory and Mathematics