arXiv Analytics

arXiv:2309.08247 [cs.LG]

A Geometric Perspective on Autoencoders

Yonghyeon Lee

Published 2023-09-15 | Version 1

This paper presents the geometric aspects of the autoencoder framework, which, despite their importance, have received relatively little attention. Given a set of high-dimensional data points that approximately lie on some lower-dimensional manifold, an autoencoder simultaneously learns the \textit{manifold} and a \textit{coordinate chart} for it. This geometric perspective naturally raises questions such as "Does a finite set of data points correspond to a single manifold?" or "Is there only one coordinate chart that can represent the manifold?". The answer to both questions is no, implying that a given dataset admits multiple autoencoder solutions. Consequently, autoencoders sometimes learn incorrect manifolds with severely distorted latent space representations. In this paper, we introduce recent geometric approaches that address these issues.
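
To make the geometric reading concrete, below is a minimal sketch (not the paper's method, and all names and sizes are illustrative assumptions): the decoder parametrizes a low-dimensional manifold embedded in the data space, while the encoder plays the role of a coordinate chart assigning latent coordinates to nearby data points.

import torch
import torch.nn as nn

data_dim, latent_dim = 784, 2  # assumed dimensions for illustration

encoder = nn.Sequential(           # chart: data space -> latent coordinates
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, latent_dim),
)
decoder = nn.Sequential(           # parametrization: coordinates -> manifold in data space
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

x = torch.randn(256, data_dim)     # stand-in for high-dimensional data points

for _ in range(100):
    z = encoder(x)                 # coordinates of x under the learned chart
    x_hat = decoder(z)             # projection of x onto the learned manifold
    loss = ((x - x_hat) ** 2).mean()   # reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

Because the reconstruction loss only constrains the composition decoder(encoder(x)) near the data, many different manifold/chart pairs achieve the same loss, which is the non-uniqueness the abstract refers to.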

Comments: 10 pages, 13 figures, a summary of the contents presented in publications from NeurIPS 2021, ICLR 2022, and TAG-ML at ICML 2023
Categories: cs.LG, cs.AI, cs.CG
Related articles:
arXiv:2011.07318 [cs.LG] (Published 2020-11-14)
A Geometric Perspective on Self-Supervised Policy Adaptation
arXiv:2304.03720 [cs.LG] (Published 2023-04-07)
Representer Theorems for Metric and Preference Learning: A Geometric Perspective
arXiv:2105.02543 [cs.LG] (Published 2021-05-06)
Bayesian Active Learning by Disagreements: A Geometric Perspective