arXiv Analytics

arXiv:1804.02541 [cs.CV]

Statistical transformer networks: learning shape and appearance models via self supervision

Anil Bas, William A. P. Smith

Published 2018-04-07Version 1

We generalise Spatial Transformer Networks (STN) by replacing the parametric transformation of a fixed, regular sampling grid with a deformable, statistical shape model which is itself learnt. We call this a Statistical Transformer Network (StaTN). When a network containing a StaTN is trained end-to-end for a particular task, it learns the optimal nonrigid alignment of the input data for that task. Moreover, the statistical shape model is learnt with no direct supervision (such as landmarks) and can be reused for other tasks. Besides training for a specific task, we also show that a StaTN can learn a shape model using generic loss functions. This includes a loss inspired by the minimum description length principle, in which an appearance model is also learnt from scratch. In this configuration, our model learns an active appearance model and a means to fit the model from scratch with no supervision at all, not even identity labels.
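The core idea, a sampling grid parameterised as a statistical shape model (mean grid plus learned linear deformation modes) instead of a fixed regular grid, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; all function and variable names are hypothetical, and the shape basis here stands in for the modes the network would learn.

```python
import numpy as np

def bilinear_sample(img, xs, ys):
    """Bilinearly sample a (H, W) image at float coordinates (xs, ys)."""
    H, W = img.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
    dx = np.clip(xs - x0, 0.0, 1.0)
    dy = np.clip(ys - y0, 0.0, 1.0)
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

def statn_sample(img, mean_grid, basis, coeffs):
    """Sample an image through a statistical shape model.

    Unlike a standard STN, the grid is not a fixed regular lattice under a
    parametric warp: it is mean_grid + basis @ coeffs, a deformable grid
    whose modes (basis) would themselves be learnt end-to-end.

    mean_grid: (N, 2) mean sampling locations as (x, y)
    basis:     (N, 2, K) linear shape modes
    coeffs:    (K,) shape parameters predicted by the network
    """
    grid = mean_grid + basis @ coeffs          # (N, 2) deformed grid
    return bilinear_sample(img, grid[:, 0], grid[:, 1])
```

With all coefficients at zero the module samples at the mean shape; nonzero coefficients deform the grid along the learned modes, which is what lets the network discover a nonrigid alignment of its input.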

Related articles: Most relevant | Search more
arXiv:1807.10731 [cs.CV] (Published 2018-07-27)
An Algorithm for Learning Shape and Appearance Models without Annotations
arXiv:1712.01261 [cs.CV] (Published 2017-12-02)
SfSNet : Learning Shape, Reflectance and Illuminance of Faces in the Wild
arXiv:2404.03042 [cs.CV] (Published 2024-04-03)
AWOL: Analysis WithOut synthesis using Language