arXiv:1910.11104 [cs.CV]

Exploiting video sequences for unsupervised disentangling in generative adversarial networks

Facundo Tuesca, Lucas C. Uzal

Published 2019-10-16 (Version 1)

In this work we present an adversarial training algorithm that exploits correlations in video to learn, without supervision, an image generator model with a disentangled latent space. The proposed methodology requires only a few modifications to the standard Generative Adversarial Network (GAN) training algorithm and involves training with sets of frames taken from short videos. We train our model on two datasets of face-centered videos showing different people speaking or moving their heads: VidTIMIT and YouTube Faces. We found that our proposal allows us to split the generator latent space into two subspaces. One of them controls content attributes, those that do not change along short video sequences; for the considered datasets, this is the identity of the generated face. The other subspace controls motion attributes, those that are observed to change along short videos; we observed that these motion attributes are facial expressions, head orientation, and lip and eye movements. The presented experiments provide quantitative and qualitative evidence that the proposed methodology induces a disentanglement of these two kinds of attributes in the latent space.
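
The abstract only describes the training procedure at a high level, so the following is a minimal PyTorch sketch of the core idea as stated: the generator's latent vector is split into a content part, shared by all frames sampled from the same short video, and a motion part drawn independently per frame. All names, architectures, dimensions, and training details here (G, D, Z_CONTENT, Z_MOTION, K, the optimizers, and the BCE loss) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (PyTorch) of the content/motion latent split described in the abstract.
# Architectures, dimensions, and loss details are assumptions; the abstract only states
# that frames from the same short video share the "content" code while the "motion"
# code varies per frame.
import torch
import torch.nn as nn

Z_CONTENT, Z_MOTION, IMG_DIM = 64, 16, 64 * 64 * 3   # assumed latent/image sizes
K = 4                                                 # assumed frames sampled per video

# Placeholder generator / discriminator (a real face model would use conv nets).
G = nn.Sequential(nn.Linear(Z_CONTENT + Z_MOTION, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))

def sample_latents(batch_videos: int) -> torch.Tensor:
    """One content code per video, repeated over K frames; fresh motion code per frame."""
    z_content = torch.randn(batch_videos, 1, Z_CONTENT).expand(-1, K, -1)
    z_motion = torch.randn(batch_videos, K, Z_MOTION)
    return torch.cat([z_content, z_motion], dim=-1).reshape(batch_videos * K, -1)

# One standard GAN step; a real batch would consist of K frames taken from
# the same short video clip for each of the 8 sampled videos.
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8 * K, IMG_DIM) * 2 - 1             # stand-in for real video frames
fake = G(sample_latents(8))

# Discriminator update
d_loss = bce(D(real), torch.ones(8 * K, 1)) + bce(D(fake.detach()), torch.zeros(8 * K, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator update
g_loss = bce(D(G(sample_latents(8))), torch.ones(8 * K, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The key design point this sketch illustrates is that only the latent sampling changes with respect to standard GAN training: sharing the content code across frames of a video is what pushes video-invariant attributes (identity) into one subspace and frame-varying attributes (expression, pose) into the other.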

Comments: This preprint is the result of the work done for the undergraduate dissertation of F. Tuesca supervised by L.C. Uzal and presented in June 2018
Categories: cs.CV, cs.LG, stat.ML
Related articles:
arXiv:1901.11384 [cs.CV] (Published 2019-01-23)
Learning to navigate image manifolds induced by generative adversarial networks for unsupervised video generation
arXiv:1805.11504 [cs.CV] (Published 2018-05-29)
Capturing Variabilities from Computed Tomography Images with Generative Adversarial Networks
arXiv:1904.04751 [cs.CV] (Published 2019-04-09)
User-Controllable Multi-Texture Synthesis with Generative Adversarial Networks