arXiv Analytics

arXiv:1708.01289 [cs.LG]

Independently Controllable Features

Valentin Thomas, Jules Pondard, Emmanuel Bengio, Marc Sarfati, Philippe Beaudoin, Marie-Jean Meurs, Joelle Pineau, Doina Precup, Yoshua Bengio

Published 2017-08-03, Version 1

It has been postulated that a good representation is one that disentangles the underlying explanatory factors of variation. However, it remains an open question what kind of training framework could potentially achieve that. Whereas most previous work focuses on the static setting (e.g., with images), we postulate that some of the causal factors could be discovered if the learner is allowed to interact with its environment. The agent can experiment with different actions and observe their effects. More specifically, we hypothesize that some of these factors correspond to aspects of the environment which are independently controllable, i.e., that there exists a policy and a learnable feature for each such aspect of the environment, such that this policy can yield changes in that feature with minimal changes to other features that explain the statistical variations in the observed data. We propose a specific objective function to find such factors and verify experimentally that it can indeed disentangle independently controllable aspects of the environment without any extrinsic reward signal.
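The abstract does not state the objective function itself, but the core idea — a policy should change its own feature with minimal change to the others — can be illustrated with a simple "selectivity" ratio. The sketch below is an assumption for illustration, not the paper's actual objective: it scores one environment transition by the change in the controlled feature divided by the total change across all features.

```python
import numpy as np

def selectivity(features_before, features_after, k, eps=1e-8):
    """Score how selectively a transition moved feature k.

    Returns the absolute change in feature k divided by the total
    absolute change across all features (eps avoids division by zero).
    A value near 1.0 means the transition changed feature k while
    leaving the other features almost untouched.
    """
    delta = np.abs(np.asarray(features_after) - np.asarray(features_before))
    return float(delta[k] / (delta.sum() + eps))

# A transition that changes only feature 0 is highly selective for it:
before = np.array([0.0, 0.5, 1.0])
after = np.array([1.0, 0.5, 1.0])
print(selectivity(before, after, k=0))  # close to 1.0
```

In a full training setup, each policy would be rewarded for maximizing this ratio on the feature it is paired with, which pressures the learned features toward independently controllable aspects of the environment.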

Related articles:
arXiv:1907.01285 [cs.LG] (Published 2019-07-02)
Learning the Arrow of Time
arXiv:1911.03731 [cs.LG] (Published 2019-11-09)
Learning Internal Representations
arXiv:2010.00581 [cs.LG] (Published 2020-10-01)
Multi-agent Social Reinforcement Learning Improves Generalization