Article ID: 1148393
Journal: Journal of Statistical Planning and Inference
Published Year: 2015
Pages: 13
File Type: PDF
Abstract

• The paper addresses the long-standing spiking problem for isotonic regression models by adding a penalty to the estimator at the lower and upper endpoints of the regression function.
• The optimal penalty is shown to depend on the derivatives of the function at the boundaries.
• Parametric bootstrap confidence intervals using the penalized estimator show substantial improvement in coverage probabilities at the boundaries compared to existing methods.
• Improvements are also shown in the power of the hypothesis test of a constant versus an increasing regression function.

In isotonic regression, the mean function is assumed to be monotone increasing (or decreasing) but otherwise unspecified. The classical isotonic least-squares estimator is known to be inconsistent at the boundaries; this is called the “spiking” problem. A penalty on the range of the regression function is proposed to correct the spiking problem for univariate and multivariate isotonic models. The penalized estimator is shown to be consistent everywhere for a wide range of sizes of the penalty parameter. For the univariate case, the optimal penalty is shown to depend on the derivatives of the true regression function at the boundaries. Pointwise confidence intervals are constructed using the penalized estimator and bootstrapping ideas; simulations show that these behave well in moderate-sized samples. Simulation studies also show that the power of the hypothesis test of a constant versus an increasing regression function improves substantially compared to the power of the test with the unpenalized alternative, and also compares favorably to tests using parametric alternatives.
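To illustrate the general idea (not the authors' exact estimator or their optimal penalty choice), the sketch below fits isotonic least squares via the standard pool-adjacent-violators algorithm (PAVA), and then adds a linear penalty on the range of the fit. Because a penalty of the form 2α(m_n − m_1) only adds linear terms in the first and last fitted values, completing the square shows it is equivalent to running ordinary PAVA on data whose endpoint observations are shifted inward by α; the `penalized_isotonic` helper and the penalty value are illustrative assumptions, not from the paper.

```python
import numpy as np

def pava(y):
    """Isotonic (nondecreasing) least squares via pool-adjacent-violators."""
    y = np.asarray(y, dtype=float)
    # Each block stores (mean, weight, count); violating blocks are merged.
    means, weights, counts = [], [], []
    for yi in y:
        means.append(yi); weights.append(1.0); counts.append(1)
        while len(means) > 1 and means[-2] > means[-1]:
            m2, w2, c2 = means.pop(), weights.pop(), counts.pop()
            m1, w1, c1 = means.pop(), weights.pop(), counts.pop()
            wt = w1 + w2
            means.append((w1 * m1 + w2 * m2) / wt)
            weights.append(wt); counts.append(c1 + c2)
    return np.repeat(means, counts)

def penalized_isotonic(y, alpha):
    """Minimize sum (y_i - m_i)^2 + 2*alpha*(m_n - m_1) over monotone m.

    The linear range penalty is absorbed by shifting the two endpoint
    observations inward by alpha, then running ordinary PAVA.
    """
    z = np.asarray(y, dtype=float).copy()
    z[0] += alpha   # pulls the left boundary estimate up
    z[-1] -= alpha  # pulls the right boundary estimate down
    return pava(z)
```

The inward shift shrinks the fitted range at the boundaries, which is exactly where the unpenalized estimator spikes; the paper's contribution is showing how α should scale with the sample size and the boundary derivatives to get consistency and good coverage.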

Related Topics
Physical Sciences and Engineering Mathematics Applied Mathematics
Authors
, , ,