arXiv Analytics

arXiv:2102.02504 [stat.ML]

Meta-strategy for Learning Tuning Parameters with Guarantees

Dimitri Meunier, Pierre Alquier

Published 2021-02-04 (Version 1)

Online gradient methods, like the online gradient algorithm (OGA), often depend on tuning parameters that are difficult to set in practice. We consider an online meta-learning scenario, and we propose a meta-strategy to learn these parameters from past tasks. Our strategy is based on the minimization of a regret bound. It allows us to learn the initialization and the step size in OGA with guarantees. We provide a regret analysis of the strategy in the case of convex losses. It suggests that, when there are parameters $\theta_1,\dots,\theta_T$ that solve tasks $1,\dots,T$ well, respectively, and that are close enough to each other, our strategy indeed improves on learning each task in isolation.
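
As an illustration of the setting (not the paper's exact algorithm), the following Python sketch runs OGA within each task on simple convex quadratic losses and, at the meta level, updates the initialization across tasks; the step size is kept fixed here, although the strategy in the abstract also learns it. The toy data, the loss choice, and the meta-update rule are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    d, n_rounds, n_tasks = 5, 50, 20

    # Task parameters theta_1, ..., theta_T that are close to one another,
    # matching the favourable regime described in the abstract (toy data).
    center = rng.normal(size=d)
    task_params = [center + 0.1 * rng.normal(size=d) for _ in range(n_tasks)]

    def run_task(theta_star, init, step):
        """Within-task OGA on the convex round losses 0.5 * ||w - x_t||^2."""
        w = init.copy()
        cumulative_loss = 0.0
        for _ in range(n_rounds):
            x = theta_star + 0.1 * rng.normal(size=d)  # noisy round-t observation
            cumulative_loss += 0.5 * np.sum((w - x) ** 2)
            w = w - step * (w - x)                     # OGA step: gradient of the round loss
        return cumulative_loss, w

    # Meta-level: update the initialization across tasks (a stand-in for the
    # paper's minimization of a regret bound; the step size is kept fixed here).
    init, step, meta_lr = np.zeros(d), 0.5, 0.1
    for theta_star in task_params:
        loss, w_final = run_task(theta_star, init, step)
        init = init + meta_lr * (w_final - init)       # pull the init toward the task's solution
        print(f"cumulative task loss: {loss:.2f}")

Because the task parameters are close to each other in this sketch, the learned initialization starts each new task near its solution, which is the regime in which the abstract claims an improvement over learning each task in isolation.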

Related articles:
arXiv:2210.13132 [stat.ML] (Published 2022-10-24)
PAC-Bayesian Offline Contextual Bandits With Guarantees
arXiv:2002.02892 [stat.ML] (Published 2020-02-07)
Sparse and Smooth: improved guarantees for Spectral Clustering in the Dynamic Stochastic Block Model
arXiv:2403.02051 [stat.ML] (Published 2024-03-04, updated 2025-05-12)
Privacy of SGD under Gaussian or Heavy-Tailed Noise: Guarantees without Gradient Clipping