{
  "id": "1604.08153",
  "version": "v1",
  "published": "2016-04-27T17:48:39.000Z",
  "updated": "2016-04-27T17:48:39.000Z",
  "title": "Classifying Options for Deep Reinforcement Learning",
  "authors": [
    "Kai Arulkumaran",
    "Nat Dilokthanakul",
    "Murray Shanahan",
    "Anil Anthony Bharath"
  ],
  "categories": [
    "cs.LG",
    "cs.AI"
  ],
  "abstract": "Deep reinforcement learning is the learning of multiple levels of hierarchical representations for reinforcement learning. Hierarchical reinforcement learning focuses on temporal abstractions in planning and learning, allowing temporally-extended actions to be transferred between tasks. In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different \"option heads\" on the policy network, and a supervisory network for choosing between the different options. We show that in a domain where we have prior knowledge of the mapping between states and options, our augmented DQN achieves a policy competitive with that of a standard DQN, but with much lower sample complexity. This is achieved through a straightforward architectural adjustment to the DQN, as well as an additional supervised neural network.",
  "revisions": [
    {
      "version": "v1",
      "updated": "2016-04-27T17:48:39.000Z"
    }
  ],
  "analyses": {
    "keywords": [
      "deep reinforcement learning",
      "classifying options",
      "lower sample complexity",
      "additional supervised neural network",
      "multiple levels"
    ],
    "note": {
      "typesetting": "TeX",
      "pages": 0,
      "language": "en",
      "license": "arXiv",
      "status": "editable",
      "adsabs": "2016arXiv160408153A"
    }
  }
}