arXiv Analytics

arXiv:2205.10662 [cs.LG]

Equivariant Mesh Attention Networks

Sourya Basu, Jose Gallego-Posada, Francesco Viganò, James Rowbottom, Taco Cohen

Published 2022-05-21 (Version 1)

Equivariance to symmetries has proven to be a powerful inductive bias in deep learning research. Recent works on mesh processing have concentrated on various kinds of natural symmetries, including translations, rotations, scaling, node permutations, and gauge transformations. To date, no existing architecture is equivariant to all of these transformations. Moreover, previous implementations have not always applied these symmetry transformations to the test dataset, making it difficult to verify whether a model actually attains the claimed equivariance properties. In this paper, we present an attention-based architecture for mesh data that is provably equivariant to all of the transformations mentioned above. We carry out experiments on the FAUST and TOSCA datasets, applying the mentioned symmetries to the test set only. Our results confirm that the proposed architecture is equivariant, and therefore robust, to these local and global transformations.
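The evaluation protocol the abstract describes (transforming only the test inputs and comparing outputs) can be illustrated with a short numerical check. The sketch below is a minimal, hypothetical example in NumPy: `model` is a toy rotation- and translation-invariant, permutation-equivariant function (pairwise-distance row sums) standing in for the paper's actual architecture, which lives in the linked repository.

```python
import numpy as np

# Hypothetical stand-in for a mesh network (NOT the paper's model):
# per-node sums of pairwise distances, which are invariant to rotations
# and translations and equivariant to node permutations.
def model(verts: np.ndarray) -> np.ndarray:
    d = np.linalg.norm(verts[:, None, :] - verts[None, :, :], axis=-1)
    return d.sum(axis=1, keepdims=True)  # one scalar feature per node

rng = np.random.default_rng(0)
verts = rng.normal(size=(30, 3))  # toy mesh vertex positions

# Random rotation: QR of a Gaussian matrix, sign-fixed so det(Q) = +1.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1

t = rng.normal(size=(3,))            # random translation
perm = rng.permutation(len(verts))   # random node relabeling

out = model(verts)
out_rot = model(verts @ Q.T)         # transform the test input only
out_trans = model(verts + t)
out_perm = model(verts[perm])

# Invariance of per-node scalars under rotation/translation;
# equivariance (row permutation) under node relabeling.
assert np.allclose(out, out_rot, atol=1e-6)
assert np.allclose(out, out_trans, atol=1e-6)
assert np.allclose(out[perm], out_perm, atol=1e-6)
print("rotation, translation, and permutation checks passed")
```

The same pattern applies to a learned model: hold the trained weights fixed, transform the held-out meshes, and verify that the outputs transform as the claimed equivariance predicts.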

Comments: Implementation can be found at: https://github.com/gallego-posada/eman
Categories: cs.LG, cs.CV, stat.ML
Related articles:
arXiv:2308.06780 [cs.LG] (Published 2023-08-13)
Neural Networks at a Fraction with Pruned Quaternions
arXiv:1909.13241 [cs.LG] (Published 2019-09-29)
Context agnostic trajectory prediction based on $\lambda$-architecture
arXiv:1511.05497 [cs.LG] (Published 2015-11-17)
Learning the Architecture of Deep Neural Networks