arXiv Analytics

arXiv:2004.08763 [cs.LG]

Model-Predictive Control via Cross-Entropy and Gradient-Based Optimization

Homanga Bharadhwaj, Kevin Xie, Florian Shkurti

Published 2020-04-19, Version 1

Recent works in high-dimensional model-predictive control and model-based reinforcement learning with learned dynamics and reward models have resorted to population-based optimization methods, such as the Cross-Entropy Method (CEM), for planning a sequence of actions. To decide on an action to take, CEM conducts a search for the action sequence with the highest return according to the dynamics model and reward. Action sequences are typically randomly sampled from an unconditional Gaussian distribution and evaluated on the environment. This distribution is iteratively updated towards action sequences with higher returns. However, this planning method can be very inefficient, especially for high-dimensional action spaces. An alternative line of approaches optimizes action sequences directly via gradient descent but is prone to local optima. We propose a method to solve this planning problem by interleaving CEM and gradient descent steps in optimizing the action sequence. Our experiments show faster convergence of the proposed hybrid approach, even for high-dimensional action spaces, avoidance of local minima, and performance better than or equal to that of CEM. Code accompanying the paper is available at https://github.com/homangab/gradcem.
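To make the interleaving idea concrete, below is a minimal sketch of CEM planning with a few gradient-ascent steps on the sampled action sequences inserted before each elite refit. It assumes hypothetical differentiable learned models `dynamics_model(state, action) -> next_state` and `reward_model(state, action) -> reward`; function names, hyperparameters, and interfaces are illustrative assumptions, not the authors' implementation (see the linked gradcem repository for that).

```python
import torch


def rollout_return(dynamics_model, reward_model, state, actions):
    """Unroll the (assumed differentiable) learned models over the horizon
    and sum the predicted rewards for each candidate action sequence."""
    pop_size, horizon, _ = actions.shape
    s = state.expand(pop_size, -1)          # broadcast current state to the population
    total = torch.zeros(pop_size)
    for t in range(horizon):
        a = actions[:, t]
        total = total + reward_model(s, a)  # predicted per-step reward
        s = dynamics_model(s, a)            # predicted next state
    return total


def plan_action(dynamics_model, reward_model, state, horizon=12, act_dim=4,
                pop_size=500, elite_frac=0.1, cem_iters=5, grad_steps=2, lr=0.01):
    """CEM planning with interleaved gradient steps (sketch, hypothetical API)."""
    mean = torch.zeros(horizon, act_dim)
    std = torch.ones(horizon, act_dim)
    n_elite = max(2, int(elite_frac * pop_size))

    for _ in range(cem_iters):
        # Sample a population of action sequences from the current Gaussian.
        actions = mean + std * torch.randn(pop_size, horizon, act_dim)
        actions.requires_grad_(True)

        # Gradient refinement: a few ascent steps on the model-predicted return,
        # interleaved with the usual CEM update.
        for _ in range(grad_steps):
            returns = rollout_return(dynamics_model, reward_model, state, actions)
            grad, = torch.autograd.grad(returns.sum(), actions)
            with torch.no_grad():
                actions += lr * grad        # gradient ascent on predicted return

        # Standard CEM step: refit the Gaussian to the elite sequences.
        with torch.no_grad():
            returns = rollout_return(dynamics_model, reward_model, state, actions)
            elites = actions[returns.topk(n_elite).indices]
            mean, std = elites.mean(dim=0), elites.std(dim=0) + 1e-6

    return mean[0]  # MPC: execute only the first action of the planned sequence
```

In this sketch the gradient steps simply reuse the differentiability of the learned dynamics and reward models, so they add little cost beyond extra backward passes through the rollout.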

Comments: L4DC 2020; accepted for presentation at the 2nd Annual Conference on Learning for Dynamics and Control
Categories: cs.LG, cs.AI, cs.RO, stat.ML
Related articles:
arXiv:1909.09501 [cs.LG] (Published 2019-09-20)
Trivializations for Gradient-Based Optimization on Manifolds
arXiv:2405.06312 [cs.LG] (Published 2024-05-10)
FedGCS: A Generative Framework for Efficient Client Selection in Federated Learning via Gradient-based Optimization
Zhiyuan Ning et al.
arXiv:2402.01879 [cs.LG] (Published 2024-02-02, updated 2024-10-02)
$\sigma$-zero: Gradient-based Optimization of $\ell_0$-norm Adversarial Examples