Article code | Journal code | Publication year | English article | Full-text version |
---|---|---|---|---|
1155063 | 958438 | 2009 | 9-page PDF | Free download |

For heavy tails, with a positive tail index γ, classical tail index estimators, like the Hill estimator, are known to be quite sensitive to the number of top order statistics k used in the estimation, whereas second-order reduced-bias estimators show much less sensitivity to changes in k. In the recent minimum-variance reduced-bias (MVRB) tail index estimators, the estimation of the second-order parameters in the bias has been performed at a level k₁ of a larger order than that of the level k at which we compute the tail index estimators. Such a procedure enables us to keep the asymptotic variance of the new estimators equal to the asymptotic variance of the Hill estimator, for all k at which we can guarantee the asymptotic normality of the Hill statistics. These values of k, as well as larger values of k, will also enable us to guarantee the asymptotic normality of the reduced-bias estimators, but, to reach the minimal mean squared error of these MVRB estimators, we need to work with levels k and k₁ of the same order. In this note we derive the way the asymptotic variance varies as a function of q, the finite limiting value of k/k₁, as the sample size n increases to infinity.
Journal: Statistics & Probability Letters - Volume 79, Issue 3, 1 February 2009, Pages 295–303
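
The following is a minimal sketch, not the paper's code, of the two estimators the abstract contrasts: the Hill estimator H(k) and an MVRB "corrected-Hill" estimator of the form H(k)(1 − β̂/(1 − ρ̂)(n/k)^ρ̂) used in this strand of the literature. The second-order parameter estimates `beta_hat` and `rho_hat` are passed in as given values here; in practice they are estimated from the data at a larger level k₁, which is exactly the interplay between k and k₁ the note studies.

```python
# Sketch of the Hill and MVRB (corrected-Hill) tail index estimators.
# beta_hat and rho_hat are assumed to be supplied externally; their
# estimation at a level k1 is not shown.
import numpy as np


def hill_estimator(sample: np.ndarray, k: int) -> float:
    """Hill estimator of the tail index, based on the top k+1 order statistics."""
    x = np.sort(sample)[::-1]                      # descending order statistics
    log_excesses = np.log(x[:k]) - np.log(x[k])    # log-spacings above the k+1-th largest
    return float(log_excesses.mean())


def mvrb_estimator(sample: np.ndarray, k: int,
                   beta_hat: float, rho_hat: float) -> float:
    """Corrected-Hill estimator: H(k) * (1 - beta_hat/(1-rho_hat) * (n/k)**rho_hat)."""
    n = len(sample)
    h = hill_estimator(sample, k)
    return h * (1.0 - beta_hat / (1.0 - rho_hat) * (n / k) ** rho_hat)


if __name__ == "__main__":
    # Illustrative run on a strict Pareto sample with tail index 0.5
    # (for a strict Pareto the bias term vanishes, so beta_hat = 0 here).
    rng = np.random.default_rng(0)
    gamma = 0.5
    data = rng.pareto(1.0 / gamma, size=5000) + 1.0
    for k in (50, 200, 500):
        print(k, hill_estimator(data, k),
              mvrb_estimator(data, k, beta_hat=0.0, rho_hat=-1.0))
```

As the abstract notes, the Hill values typically drift with k under real second-order behaviour, while the corrected version is far less sensitive to the choice of k, at the cost of having to estimate β and ρ at a suitably large level k₁.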