arXiv Analytics

arXiv:2206.13078 [cs.CV]

Video2StyleGAN: Encoding Video in Latent Space for Manipulation

Jiyang Yu, Jingen Liu, Jing Huang, Wei Zhang, Tao Mei

Published 2022-06-27 (Version 1)

Many recent methods edit face images by leveraging the latent space of pretrained GANs. However, few attempts have been made to apply them directly to videos, because 1) they do not guarantee temporal consistency, 2) their processing speed is too slow for video, and 3) they cannot accurately encode details of face motion and expression. To this end, we propose a novel network that encodes face videos into the latent space of StyleGAN for semantic face video manipulation. Based on the vision transformer, our network reuses the high-resolution portion of the latent vector to enforce temporal consistency. To capture subtle face motions and expressions, we design novel losses that involve sparse facial landmarks and a dense 3D face mesh. We have thoroughly evaluated our approach and demonstrated its application to various face video manipulations. In particular, we propose a novel network for pose/expression control in a 3D coordinate system. Both qualitative and quantitative results show that our approach significantly outperforms existing single-image methods while running in real time (66 fps).
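To make the latent-reuse idea concrete, the sketch below illustrates one plausible reading of it: in StyleGAN2's W+ space, the later style vectors control fine, high-resolution appearance, so sharing that portion across all frames of a clip keeps identity and texture stable, while a per-frame head predicts only the coarse vectors that carry pose and expression, and the training objective adds landmark and mesh terms to a reconstruction loss. This is an illustrative assumption, not the authors' implementation; the split index, module names, and loss weights are hypothetical placeholders.

# Minimal PyTorch sketch of per-clip reuse of the high-resolution latent portion.
# NUM_WS/W_DIM follow the common 18 x 512 W+ layout for 1024^2 StyleGAN2 output;
# SPLIT (the coarse/fine boundary) is an assumed value, not from the paper.
import torch
import torch.nn as nn

NUM_WS, W_DIM, SPLIT = 18, 512, 8


class PerFrameLatentHead(nn.Module):
    """Predicts only the coarse (motion/expression) part of W+ for each frame."""

    def __init__(self, feat_dim=768):
        super().__init__()
        self.to_coarse = nn.Linear(feat_dim, SPLIT * W_DIM)

    def forward(self, frame_feats, shared_fine):
        # frame_feats: (T, feat_dim) features from a video encoder (e.g. a ViT)
        # shared_fine: (NUM_WS - SPLIT, W_DIM) fine codes reused across the clip
        T = frame_feats.shape[0]
        coarse = self.to_coarse(frame_feats).view(T, SPLIT, W_DIM)
        fine = shared_fine.unsqueeze(0).expand(T, -1, -1)
        return torch.cat([coarse, fine], dim=1)  # (T, NUM_WS, W_DIM)


def total_loss(recon, landmark, mesh, w_lmk=1.0, w_mesh=1.0):
    """Reconstruction plus sparse-landmark and dense-mesh terms;
    the weights here are placeholders, not the paper's values."""
    return recon + w_lmk * landmark + w_mesh * mesh

Reusing the fine codes is what ties the frames together: only the low-frequency, motion-related part of the latent varies over time, so flicker in texture and identity cannot arise from the encoder itself.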

Related articles:
arXiv:2202.12929 [cs.CV] (Published 2022-02-25)
OptGAN: Optimizing and Interpreting the Latent Space of the Conditional Text-to-Image GANs
arXiv:2011.00954 [cs.CV] (Published 2020-11-02)
Learning a Deep Reinforcement Learning Policy Over the Latent Space of a Pre-trained GAN for Semantic Age Manipulation
arXiv:2007.06600 [cs.CV] (Published 2020-07-13)
Closed-Form Factorization of Latent Semantics in GANs