arXiv Analytics

arXiv:math/0609020 [math.ST]

Current status data with competing risks: Consistency and rates of convergence of the MLE

Piet Groeneboom, Marloes H. Maathuis, Jon A. Wellner

Published 2006-09-01, updated 2008-06-17 (version 2)

We study nonparametric estimation of the sub-distribution functions for current status data with competing risks. Our main interest is in the nonparametric maximum likelihood estimator (MLE), and for comparison we also consider a simpler "naive estimator." Both types of estimators were studied by Jewell, van der Laan and Henneman [Biometrika (2003) 90 183--197], but little was known about their large sample properties. We fill part of this gap by proving that the estimators are consistent and converge globally and locally at rate $n^{1/3}$. We also show that this local rate of convergence is optimal in a minimax sense. The proof of the local rate of convergence of the MLE uses new methods, and relies on a rate result for the sum of the MLEs of the sub-distribution functions which holds uniformly on a fixed neighborhood of a point. Our results are used in Groeneboom, Maathuis and Wellner [Ann. Statist. (2008) 36 1064--1089] to obtain the local limiting distributions of the estimators.
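The naive estimator mentioned above treats each competing risk separately: for cause $k$, it is the least-squares isotonic regression of the cause-$k$ status indicators on the ordered observation times, computable with the pool adjacent violators algorithm (PAVA). The following is a minimal sketch of that idea on simulated data; the simulation design (exponential event times, two equally likely causes, uniform observation times) and all function names are illustrative assumptions, not taken from the paper, and the joint MLE itself is not implemented here.

```python
import random

def pava(values):
    """Pool Adjacent Violators: least-squares nondecreasing fit
    to a sequence, with unit weights."""
    blocks = []  # each block is [sum, count]
    for v in values:
        blocks.append([v, 1])
        # merge while the previous block's mean exceeds the last block's mean
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += n
    fit = []
    for s, n in blocks:
        fit.extend([s / n] * n)
    return fit

def naive_estimator(obs):
    """obs: list of (c, d1, d2) triples, where c is the observation time
    and d_k = 1 if the event occurred before c with cause k.
    Returns the sorted observation times and the naive estimates of the
    sub-distribution functions F1, F2 at those times."""
    obs = sorted(obs)
    times = [c for c, _, _ in obs]
    f1_hat = pava([d1 for _, d1, _ in obs])  # isotonic fit, cause 1
    f2_hat = pava([d2 for _, _, d2 in obs])  # isotonic fit, cause 2
    return times, f1_hat, f2_hat

# Hypothetical current status competing risks data: at time c we only
# observe whether the event has happened yet, and if so its cause.
random.seed(0)
data = []
for _ in range(200):
    t = random.expovariate(1.0)   # latent event time
    k = random.choice([1, 2])     # failure cause
    c = random.uniform(0.0, 2.0)  # observation (censoring) time
    data.append((c, int(t <= c and k == 1), int(t <= c and k == 2)))

times, f1_hat, f2_hat = naive_estimator(data)
```

Because each `f_k` is fit separately, the naive estimates are monotone individually but their sum is not constrained to stay below one; enforcing that joint constraint is what distinguishes the MLE studied in the paper.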

Comments: Published at http://dx.doi.org/10.1214/009053607000000974 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
Journal: Annals of Statistics 2008, Vol. 36, No. 3, 1031-1063
Categories: math.ST, stat.TH
Subjects: 62N01, 62G20, 62G05
Related articles:
arXiv:math/0609021 [math.ST] (Published 2006-09-01, updated 2008-06-17)
Current status data with competing risks: Limiting distribution of the MLE
arXiv:1909.06164 [math.ST] (Published 2019-09-13)
Uniform convergence rate of nonparametric maximum likelihood estimator for the current status data with competing risks
arXiv:1902.06931 [math.ST] (Published 2019-02-19)
On the consistency of supervised learning with missing values