An asymptotic analysis of the bootstrap bias correction for the empirical CTE

Jae Youn Ahn, Nariankadu D. Shyamalkumar

Research output: Contribution to journal › Article › peer-review


Abstract

The α-level Conditional Tail Expectation (CTE) of a continuous random variable X is defined as its conditional expectation given the event {X > q_α}, where q_α denotes its α-level quantile. It is well known that the empirical CTE (the average of the n(1 − α) largest order statistics in a sample of size n) is a negatively biased estimator of the CTE. This bias vanishes as the sample size increases, but in small samples it can be significant; hence the need for bias correction. Although the bootstrap method has been suggested for correcting the bias of the empirical CTE, recent research shows that alternate kernel-based methods of bias correction perform better in some practical examples. To further understand this phenomenon, we conduct an asymptotic analysis of the exact bootstrap bias correction for the empirical CTE, focusing on its performance as a point estimator of the bias of the empirical CTE. We provide heuristics suggesting that the exact bootstrap bias correction is approximately a kernel-based estimator, albeit one using a bandwidth that converges to zero faster than mean-square-optimal bandwidths. This approximation provides some insight into why the bootstrap method has markedly less residual bias, but at the cost of higher variance. We prove a central limit theorem (CLT) for the exact bootstrap bias correction using an alternate representation as an L1 distance of the sample observations from the α-level empirical quantile. The CLT, in particular, shows that the bootstrap bias correction has a relative error of order n^(-1/4). In contrast, for any given ε > 0, and under the assumption that the sampling density is sufficiently smooth, a relative error of order O(n^(-1/2+ε)) is attainable using kernel-based estimators. Thus, in an asymptotic sense, the bootstrap bias correction as a point estimator of the bias is not optimal in the case of smooth sampling densities. Bootstrapped risk measures have recently found interest as estimators in their own right; as an application we derive the CLT for the bootstrap expectation of the empirical CTE. We also report on a simulation study of the effect of small sample sizes on the quality of the approximation provided by the CLT. In support of the bootstrap method, we show that the bootstrap bias correction is optimal if the sampling density is constrained only to be Lipschitz of order 1/2 (or, loosely speaking, to have only half a derivative). Because densities encountered in practice are typically at least twice differentiable, this optimality result does little to make the bootstrap method attractive to practitioners.
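
For illustration only, here is a minimal Python sketch (not taken from the paper) of the empirical CTE and a bootstrap bias correction for it. The paper analyzes the exact bootstrap bias correction; the Monte Carlo resampling below only approximates it, and the function names, the ceiling convention for the number of order statistics, and the example distribution are assumptions made for this sketch.

import numpy as np

def empirical_cte(x, alpha):
    # Empirical CTE: average of the ceil(n * (1 - alpha)) largest order statistics.
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    k = int(np.ceil(n * (1.0 - alpha)))
    return x[n - k:].mean()

def bootstrap_bias_corrected_cte(x, alpha, n_boot=2000, seed=None):
    # Monte Carlo approximation of the bootstrap bias correction: the bias is
    # estimated by the average resampled empirical CTE minus the sample
    # empirical CTE, and then subtracted from the sample empirical CTE.
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    cte_hat = empirical_cte(x, alpha)
    boot = np.array([
        empirical_cte(rng.choice(x, size=x.size, replace=True), alpha)
        for _ in range(n_boot)
    ])
    bias_hat = boot.mean() - cte_hat   # typically negative for the empirical CTE
    return cte_hat - bias_hat          # bias-corrected point estimate

# Usage: a small heavy-tailed sample at alpha = 0.95 (illustrative only).
rng = np.random.default_rng(0)
sample = rng.pareto(3.0, size=100) + 1.0
print(empirical_cte(sample, 0.95))
print(bootstrap_bias_corrected_cte(sample, 0.95, seed=1))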

Original language: English
Pages (from-to): 217-234
Number of pages: 18
Journal: North American Actuarial Journal
Volume: 14
Issue number: 2
DOIs
State: Published - 1 Apr 2010
