arXiv Analytics

arXiv:2303.13462 [quant-ph]

Generalization with quantum geometry for learning unitaries

Tobias Haug, M. S. Kim

Published 2023-03-23 (Version 1)

Generalization is the ability of quantum machine learning models to make accurate predictions on new data by learning from training data. Here, we introduce the data quantum Fisher information metric (DQFIM) to determine when a model can generalize. For variational learning of unitaries, the DQFIM quantifies the number of circuit parameters and training states needed to train successfully and generalize. We apply the DQFIM to explain when a constant number of training states and a polynomial number of parameters are sufficient for generalization. Further, we show that generalization can be improved by removing symmetries from the training data. Finally, we show that out-of-distribution generalization, where training and testing data are drawn from different distributions, can outperform generalization from the same distribution. Our work opens up new approaches to improving generalization in quantum machine learning.
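
A minimal numerical sketch may help make the idea of a data-dependent Fisher metric concrete. The code below does not implement the paper's DQFIM, whose precise definition appears in the full text; it computes a related, standard quantity: the pure-state quantum Fisher information metric of U(θ)|ψ_l⟩, averaged over L training states, whose rank gives a rough sense of how many parameter directions the training data can constrain. The two-qubit RY/RZ + CNOT ansatz, the random training states, and the finite-difference derivatives are all illustrative assumptions, not choices taken from the paper.

```python
# Illustrative sketch only: averages the pure-state quantum Fisher information
# metric of U(theta)|psi_l> over a small training set and inspects its rank.
# The ansatz and training data below are hypothetical.
import numpy as np

I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def rot(axis, angle):
    """Single-qubit rotation exp(-i * angle/2 * axis)."""
    return np.cos(angle / 2) * I2 - 1j * np.sin(angle / 2) * axis

def ansatz(theta):
    """Hypothetical 2-qubit circuit: RY on each qubit, CNOT, then RZ on each qubit."""
    u = np.kron(rot(Y, theta[0]), rot(Y, theta[1]))
    u = CNOT @ u
    return np.kron(rot(Z, theta[2]), rot(Z, theta[3])) @ u

def qfim_pure(theta, psi, eps=1e-5):
    """Pure-state QFIM of U(theta)|psi> via central finite differences."""
    m = len(theta)
    phi = ansatz(theta) @ psi
    dphi = []
    for n in range(m):
        tp, tm = theta.copy(), theta.copy()
        tp[n] += eps
        tm[n] -= eps
        dphi.append((ansatz(tp) @ psi - ansatz(tm) @ psi) / (2 * eps))
    F = np.zeros((m, m))
    for n in range(m):
        for k in range(m):
            term = np.vdot(dphi[n], dphi[k]) - np.vdot(dphi[n], phi) * np.vdot(phi, dphi[k])
            F[n, k] = 4 * term.real
    return F

# L random training states; the rank of the averaged metric hints at how many
# parameter directions the training data actually probes.
rng = np.random.default_rng(0)
L, dim = 3, 4
training_states = [s / np.linalg.norm(s) for s in
                   (rng.normal(size=dim) + 1j * rng.normal(size=dim) for _ in range(L))]
theta = rng.uniform(0, 2 * np.pi, size=4)

F_avg = np.mean([qfim_pure(theta, psi) for psi in training_states], axis=0)
print("rank of data-averaged QFIM:", np.linalg.matrix_rank(F_avg, tol=1e-8))
```

In this toy setup, increasing L can only increase (or leave unchanged) the rank of the averaged metric, which mirrors the abstract's point that a sufficient amount of training data, relative to the number of circuit parameters, is what enables training and generalization.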

Related articles:
arXiv:2309.09815 [quant-ph] (Published 2023-09-18)
On The Stabilizer Formalism And Its Generalization
arXiv:2210.03421 [quant-ph] (Published 2022-10-07)
Semi-quantum private comparison and its generalization to the key agreement, summation, and anonymous ranking
arXiv:2102.08991 [quant-ph] (Published 2021-02-17)
Generalization in Quantum Machine Learning: a Quantum Information Perspective