Article ID: 4949235
Journal: Computational Statistics & Data Analysis
Published Year: 2017
Pages: 18
File Type: PDF
Abstract
Recently, researchers have proposed a variety of new methods that employ exploratory data mining algorithms to address missing data. Two promising classes of missing data methods build on classification and regression trees (CART) and random forests. The first method uses the predicted probabilities of response (vs. nonresponse) generated by a CART analysis to create inverse probability weights. In prior simulations, this method performed well when nonresponse was generated by tree-based structures, even at low sample sizes. The second method uses the values falling in the terminal nodes of CART trees to generate multiple imputations. In prior studies, these methods performed well at estimating main effects and interactions in regression models when samples were large (N=1000), but their performance was not evaluated under small-sample conditions. In the present research, we assess the performance of CART-based weights and CART-based imputations at low sample sizes (N=125 or 250) and under nonnormality when missing data are generated by smooth functions (linear, quadratic, cubic, interactive). Results suggest that random forest weights excel at low sample sizes, regardless of nonnormality, whereas CART multiple imputation is more efficient with larger samples (N=500 or 1000).
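To make the two strategies concrete, the sketch below illustrates them on simulated data. It is only an illustrative sketch, not the authors' implementation: the scikit-learn estimators, the simulated covariates and outcome, the probability clipping, and every tuning parameter are assumptions introduced here for demonstration.

# Illustrative sketch (not the authors' code): tree-based inverse probability
# weights and CART terminal-node imputation for a missing outcome y.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 250                                    # a small-sample condition comparable to the study
X = rng.normal(size=(n, 3))                # fully observed covariates
y = X @ np.array([0.5, -0.3, 0.2]) + rng.normal(size=n)

# Nonresponse on y driven by a smooth (logistic-linear) function of the covariates.
p_miss = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
responded = rng.uniform(size=n) > p_miss   # True = y is observed

# Method 1: inverse probability weights from a model of response vs. nonresponse.
# A random forest is used here; sklearn.tree.DecisionTreeClassifier could be
# swapped in for CART-style weights.
resp_model = RandomForestClassifier(n_estimators=200, min_samples_leaf=10,
                                    random_state=0).fit(X, responded)
p_respond = resp_model.predict_proba(X)[:, 1].clip(0.05, 1.0)  # clip to tame extreme weights
ipw = 1.0 / p_respond[responded]

# Weighted least squares on the complete cases using the estimated weights.
Xr = np.column_stack([np.ones(responded.sum()), X[responded]])
W = np.diag(ipw)
beta_ipw = np.linalg.solve(Xr.T @ W @ Xr, Xr.T @ W @ y[responded])

# Method 2: CART terminal-node imputation (one of m imputations). Grow a
# regression tree on the complete cases, then impute each missing y by drawing
# a donor from the observed values that fall in the same terminal node.
imp_tree = DecisionTreeRegressor(min_samples_leaf=10, random_state=0)
imp_tree.fit(X[responded], y[responded])
leaf_ids = imp_tree.apply(X)
y_imp = y.copy()
for i in np.where(~responded)[0]:
    donors = y[responded & (leaf_ids == leaf_ids[i])]
    y_imp[i] = rng.choice(donors)

print("IPW regression estimate:", beta_ipw)

In a full multiple-imputation workflow, the donor draw in the second block would be repeated m times (for example, over bootstrapped trees) and the resulting analysis estimates pooled.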
Related Topics
Physical Sciences and Engineering > Computer Science > Computational Theory and Mathematics