arXiv:1904.09489 [cs.LG]

Compression and Localization in Reinforcement Learning for ATARI Games

Joel Ruben Antony Moniz, Barun Patra, Sarthak Garg

Published 2019-04-20 (Version 1)

Deep neural networks have become commonplace in reinforcement learning, but they are often expensive in terms of the number of parameters required. Although compressing deep neural networks has recently attracted considerable attention as a way to overcome this drawback, little work has addressed compression in the context of reinforcement learning agents. This work takes first steps towards model compression in an RL agent. In particular, we compress networks to drastically reduce their parameter count (to under 3% of the original size), aided by applying a global max pool after the final convolution layer, and we propose using Actor-Mimic in the context of compression. Finally, we show that this global max pool enables weakly supervised object localization, improving the ability to identify the agent's points of focus.

Comments: NeurIPS 2018 Deep Reinforcement Learning Workshop
Categories: cs.LG, cs.AI, stat.ML
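
As a rough illustration of the architecture the abstract describes, here is a minimal PyTorch sketch of a small Atari policy network with a global max pool after the final convolution, plus a CAM-style localization map derived from the pooled head. The class name, layer sizes, and the exact localization formula are assumptions for illustration, not the authors' reported design.

```python
import torch
import torch.nn as nn

class CompressedAtariPolicy(nn.Module):
    """Sketch of a small policy network with a global max pool after
    the final convolution. Layer sizes are illustrative only."""

    def __init__(self, n_actions, in_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        # The global max pool collapses each feature map to a scalar,
        # removing the large fully connected layer that usually
        # dominates the parameter count in Atari networks.
        self.head = nn.Linear(64, n_actions)

    def forward(self, x):
        fmaps = self.features(x)          # (B, 64, H, W)
        pooled = fmaps.amax(dim=(2, 3))   # global max pool -> (B, 64)
        return self.head(pooled), fmaps

    def localization_map(self, x, action):
        # CAM-style map (an assumption about how localization is read
        # out): weight each feature map by the chosen action's linear
        # weights; high-activation regions mark the agent's focus.
        _, fmaps = self.forward(x)
        w = self.head.weight[action]      # (64,)
        return torch.einsum('c,bchw->bhw', w, fmaps)
```

Because the pooled vector has only 64 entries, the head contributes just 64 × n_actions weights, which is where most of the parameter savings over a flatten-then-dense head would come from in a design like this.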
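The abstract also proposes Actor-Mimic (Parisotto et al.) as the compression vehicle. A minimal sketch of its policy-regression term follows, assuming a temperature-scaled softmax over the expert's outputs; the temperature value is an illustrative default, and Actor-Mimic's optional feature-regression term is omitted.

```python
import torch
import torch.nn.functional as F

def actor_mimic_loss(student_logits, teacher_logits, tau=1.0):
    """Policy-regression term of Actor-Mimic: cross-entropy between
    the expert's softmax policy and the student's policy.
    tau is a softmax temperature (illustrative default)."""
    teacher_probs = F.softmax(teacher_logits / tau, dim=-1)
    log_student = F.log_softmax(student_logits, dim=-1)
    # Cross-entropy H(teacher, student), averaged over the batch.
    return -(teacher_probs * log_student).sum(dim=-1).mean()
```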