arXiv Analytics

arXiv:2505.08557 [cs.LG]

Online Learning and Unlearning

Yaxi Hu, Bernhard Schölkopf, Amartya Sanyal

Published 2025-05-13Version 1

We formalize the problem of online learning-unlearning, where a model is updated sequentially in an online setting while accommodating unlearning requests between updates. After a data point is unlearned, all subsequent outputs must be statistically indistinguishable from those of a model trained without that point. We present two online learner-unlearner (OLU) algorithms, both built upon online gradient descent (OGD). The first, passive OLU, leverages OGD's contractive property and injects noise when unlearning occurs, incurring no additional computation. The second, active OLU, uses an offline unlearning algorithm that shifts the model toward a solution excluding the deleted data. Under standard convexity and smoothness assumptions, both methods achieve regret bounds comparable to those of standard OGD, demonstrating that one can maintain competitive regret bounds while providing unlearning guarantees.
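To make the passive OLU idea concrete, here is a minimal sketch in Python: run standard online gradient descent, and when an unlearning request arrives, add Gaussian noise to the iterate so that subsequent outputs mask the deleted point's residual influence (which OGD's contractive updates have already shrunk). The loss, step size `eta`, and noise scale `sigma` are illustrative assumptions, not the paper's exact construction or calibration.

```python
import numpy as np

def squared_loss_grad(w, x, y):
    # Gradient of 0.5 * (w @ x - y)^2 with respect to w.
    return (w @ x - y) * x

def passive_olu(stream, dim, eta=0.1, sigma=0.5, seed=0):
    """Hypothetical passive online learner-unlearner sketch.

    stream: sequence of events, either ("learn", x, y) or ("unlearn",).
    On "learn", perform a standard OGD step; on "unlearn", inject
    Gaussian noise instead of recomputing anything -- no extra
    gradient computation is incurred.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for event in stream:
        if event[0] == "learn":
            _, x, y = event
            w = w - eta * squared_loss_grad(w, x, y)  # OGD update
        elif event[0] == "unlearn":
            # Contractive OGD steps have shrunk the deleted point's
            # influence; added noise masks what remains, so later
            # outputs are (approximately) indistinguishable from a
            # model trained without that point.
            w = w + sigma * rng.standard_normal(dim)
    return w
```

The active variant would instead replace the noise-injection branch with an offline unlearning step that shifts `w` toward the minimizer of the loss excluding the deleted data, trading extra computation for a smaller perturbation.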

Related articles:
arXiv:1508.00842 [cs.LG] (Published 2015-08-04)
Perceptron like Algorithms for Online Learning to Rank
arXiv:1905.12721 [cs.LG] (Published 2019-05-29)
Matrix-Free Preconditioning in Online Learning
arXiv:1807.01280 [cs.LG] (Published 2018-07-03)
On the Computational Power of Online Gradient Descent