arXiv:2107.07115 [stat.ML]

Principal component analysis for Gaussian process posteriors

Hideaki Ishibashi, Shotaro Akaho

Published 2021-07-15, Version 1

This paper proposes an extension of principal component analysis to Gaussian process (GP) posteriors, denoted GP-PCA. Because GP-PCA estimates a low-dimensional space of GP posteriors, it can be used for meta-learning, a framework for improving performance on a new task by estimating the structure shared by a set of tasks. The difficulty lies in defining the structure of a set of GPs, whose parameters are infinite-dimensional, including a coordinate system and a divergence. In this study, we reduce the infinite-dimensional GPs to a finite-dimensional case under an information-geometric framework by considering the space of GP posteriors that share the same prior. In addition, we propose an approximation method for GP-PCA based on variational inference and demonstrate the effectiveness of GP-PCA for meta-learning through experiments.
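As a rough illustration of the core idea (summarizing each task's GP posterior by a finite-dimensional representation so that a PCA-style analysis of the posterior set becomes possible), the sketch below fits a GP posterior per task, evaluates each posterior mean on a shared grid, and runs ordinary Euclidean PCA over those summaries. This is not the paper's algorithm: GP-PCA works information-geometrically with posteriors sharing a common prior, whereas this sketch uses plain posterior means and Euclidean PCA, and all function names and parameters here are hypothetical.

```python
# Hypothetical sketch, not the paper's GP-PCA: represent each task's GP
# posterior by its mean on a shared grid, then apply ordinary PCA.
import numpy as np

def gp_posterior_mean(X, y, X_grid, lengthscale=1.0, noise=0.1):
    """Posterior mean of a zero-mean GP with an RBF kernel, on X_grid."""
    def rbf(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-0.5 * d2 / lengthscale**2)
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    return rbf(X_grid, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(0)
X_grid = np.linspace(0, 1, 50)[:, None]

# Simulate a set of related tasks: noisy sinusoids with task-specific phase.
posterior_means = []
for _ in range(20):
    X = rng.uniform(0, 1, (15, 1))
    y = np.sin(2 * np.pi * X[:, 0] + rng.normal(0, 0.3)) + rng.normal(0, 0.1, 15)
    posterior_means.append(gp_posterior_mean(X, y, X_grid))
M = np.stack(posterior_means)              # shape: (tasks, grid points)

# PCA over the finite-dimensional posterior summaries.
M_centered = M - M.mean(0)
U, S, Vt = np.linalg.svd(M_centered, full_matrices=False)
Z = M_centered @ Vt[:2].T                  # 2-D coordinates per task posterior
print("explained variance ratio:", (S[:2]**2 / (S**2).sum()).round(3))
```

In this toy setting the tasks differ only in phase, so a few principal components capture most of the variation across posteriors; the paper's contribution is to make this kind of analysis well-defined for GPs themselves via information geometry rather than a grid-based Euclidean approximation.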

Related articles:
arXiv:1903.03571 [stat.ML] (Published 2019-03-08)
Rates of Convergence for Sparse Variational Gaussian Process Regression
arXiv:2301.02750 [stat.ML] (Published 2023-01-06)
Principal Component Analysis in Space Forms
arXiv:2106.14238 [stat.ML] (Published 2021-06-27)
Interpretable Network Representation Learning with Principal Component Analysis