arXiv Analytics

arXiv:1611.05545 [math.PR]

Stochastic Gradient Descent in Continuous Time

Justin Sirignano, Konstantinos Spiliopoulos

Published 2016-11-17 (Version 1)

We consider stochastic gradient descent for continuous-time models. Traditional approaches to the statistical estimation of continuous-time models, such as batch optimization, can be impractical for large datasets where observations occur over a long period of time. Stochastic gradient descent provides a computationally efficient method for such statistical estimation problems. The stochastic gradient descent algorithm performs an online parameter update in continuous time, with the parameter updates satisfying a stochastic differential equation. The parameters are proven to converge to a local minimum of a natural objective function for the estimation of the continuous-time dynamics. The convergence proof leverages ergodicity, using an appropriate Poisson equation to describe the evolution of the parameters at large times. Numerical analysis of the stochastic gradient descent algorithm is presented for several examples, including the Ornstein-Uhlenbeck process, Burgers' stochastic partial differential equation, and reinforcement learning.
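
The update rule sketched in the abstract can be made concrete. Below is a minimal Python illustration (not the authors' code) of continuous-time stochastic gradient descent discretized with an Euler-Maruyama scheme, applied to the Ornstein-Uhlenbeck example the abstract mentions. The drift parametrization f(x; theta) = -theta*x, the learning-rate schedule, and all numerical constants are assumptions made for this sketch.

import numpy as np

# Illustrative sketch (assumed setup, not the paper's code) of stochastic
# gradient descent in continuous time, discretized via Euler-Maruyama, for
# estimating the mean-reversion rate of an Ornstein-Uhlenbeck process
#     dX_t = -theta_true * X_t dt + sigma dW_t.
# Model drift: f(x; theta) = -theta * x. The parameter follows the update
#     dtheta_t = alpha_t * grad_theta f(X_t; theta_t) * (dX_t - f(X_t; theta_t) dt),
# the continuous-time analogue of an SGD step.

rng = np.random.default_rng(0)

theta_true = 2.0     # true mean-reversion rate (assumed for the example)
sigma = 0.5          # diffusion coefficient (assumed)
dt = 1e-3            # Euler-Maruyama step size
n_steps = 1_000_000  # long horizon so ergodic averaging can take effect

x = 0.0              # state of the observed OU process
theta = 0.0          # initial parameter guess

for k in range(n_steps):
    t = k * dt
    alpha = 1.0 / (1.0 + t)                 # decaying learning rate alpha_t
    dW = rng.normal(0.0, np.sqrt(dt))       # Brownian increment
    dX = -theta_true * x * dt + sigma * dW  # observed increment of X

    grad_f = -x                             # grad_theta f(x; theta) for f = -theta*x
    theta += alpha * grad_f * (dX - (-theta * x) * dt)  # SGDCT update
    x += dX

print(f"estimated theta = {theta:.3f}  (true value {theta_true})")

On this linear example the expected drift of the update is alpha_t * x^2 * (theta_true - theta_t) dt, so the parameter is pulled toward the true value whenever the process is away from zero, and the decaying learning rate plays the role of the usual Robbins-Monro step-size condition.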

Related articles:
arXiv:1409.6773 [math.PR] (Published 2014-09-23)
On a Stopping Game in continuous time
arXiv:1208.4922 [math.PR] (Published 2012-08-24, updated 2013-06-18)
Martingale Optimal Transport and Robust Hedging in Continuous Time
arXiv:math/0508451 [math.PR] (Published 2005-08-24)
On the power of two choices: Balls and bins in continuous time