arXiv:1512.06290 [math.ST]
On the Finite Sample Properties of Regularized M-estimators
Published 2015-12-19, Version 1
We propose a general framework for regularization in M-estimation problems under time-dependent (absolutely regular mixing) data which encompasses many of the existing estimators. We derive non-asymptotic concentration bounds for the regularized M-estimator. The concentration rate exhibits a "variance-bias" trade-off, with the "variance" term being governed by a novel measure of the "size" of the parameter set. We also show that the mixing structure affects the variance term by scaling the number of observations; depending on the decay rate of the mixing coefficients, this scaling can even affect the asymptotic behavior. Finally, we propose a data-driven method for choosing the tuning parameters of the regularized estimator which yields the same (up to constants) concentration bound as one that optimally balances the "(squared) bias" and "variance" terms. We illustrate the results with several canonical examples of both non-parametric and high-dimensional models.
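For orientation only, the following is a minimal sketch of the generic regularized M-estimation problem the abstract refers to, together with the typical shape of a "(squared) bias"-"variance" concentration bound; the symbols (the loss m, penalty Pen, tuning parameter lambda_n, parameter set Theta, and effective sample size n_eff) are illustrative placeholders and not the paper's own notation.

\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% A generic regularized M-estimator: empirical risk plus a penalty,
% with lambda_n the tuning parameter. Placeholder notation, not the paper's.
\[
  \hat{\theta}_n \in \arg\min_{\theta \in \Theta}
    \left\{ \frac{1}{n} \sum_{t=1}^{n} m(Z_t, \theta)
      \;+\; \lambda_n \, \mathrm{Pen}(\theta) \right\}.
\]

% A concentration bound of the kind described in the abstract would
% typically decompose into a "bias" (approximation-error) term and a
% "variance" term whose denominator is an effective sample size n_eff,
% i.e. the number of observations rescaled by the mixing structure.
\[
  d\!\left(\hat{\theta}_n, \theta_0\right)
    \;\lesssim\;
    \underbrace{\mathrm{approx.\ error}(\Theta)}_{\text{``bias''}}
    \;+\;
    \underbrace{\sqrt{\frac{\mathrm{complexity}(\Theta)}{n_{\mathrm{eff}}}}}_{\text{``variance''}}.
\]

\end{document}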