arXiv:2409.13453 [math.NA]

Data Compression using Rank-1 Lattices for Parameter Estimation in Machine Learning

Michael Gnewuch, Kumar Harsha, Marcin Wnuk

Published 2024-09-20 (Version 1)

The mean squared error and its regularized versions are standard loss functions in supervised machine learning. However, calculating these losses for large data sets can be computationally demanding. Modifying an approach of J. Dick and M. Feischl [Journal of Complexity 67 (2021)], we present algorithms that reduce large data sets to a smaller size using rank-1 lattices. Rank-1 lattices are quasi-Monte Carlo (QMC) point sets that are, if carefully chosen, well-distributed in a multidimensional unit cube. The compression strategy, carried out in a preprocessing step, assigns every lattice point a pair of weights that depend on the original data and responses and represent its relative importance. As a result, the compressed data makes the iterative loss evaluations in the optimization steps much faster. We analyze the errors of our QMC data compression algorithms and the cost of the preprocessing step for functions whose Fourier coefficients decay sufficiently fast, so that they lie in certain Wiener algebras or Korobov spaces. In particular, we prove that our approach can achieve arbitrarily high convergence rates provided the functions are sufficiently smooth.
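
For illustration, the following Python sketch shows one way a lattice-based compression of the squared loss can be organized. It is not the paper's construction: the weight assignment below uses a simple nearest-lattice-point binning as a stand-in for the Fourier-based weights of the Dick–Feischl-type algorithm, and the function names (rank1_lattice, compress, compressed_mse) and the Fibonacci generating vector are illustrative assumptions.

```python
import numpy as np

def rank1_lattice(n, generator):
    # Rank-1 lattice: z_j = (j * g / n) mod 1 for j = 0, ..., n-1, in [0, 1)^d
    g = np.asarray(generator, dtype=float)
    j = np.arange(n)[:, None]
    return (j * g[None, :] / n) % 1.0

def compress(X, y, lattice):
    # Toy weight assignment: bin each data point to its nearest lattice point
    # and accumulate a pair of weights per lattice point:
    #   w_j: data mass near z_j        (depends on the data only)
    #   v_j: response mass near z_j    (depends on data and responses)
    d2 = ((X[:, None, :] - lattice[None, :, :]) ** 2).sum(axis=2)
    idx = d2.argmin(axis=1)
    m = lattice.shape[0]
    w = np.bincount(idx, minlength=m).astype(float)
    v = np.bincount(idx, weights=y, minlength=m)
    return w, v

def compressed_mse(f, lattice, w, v, y_sq_sum, n_data):
    # Approximate (1/n) * sum_i (f(x_i) - y_i)^2 from the compressed data only:
    #   sum_i f(x_i)^2    ~  sum_j w_j f(z_j)^2
    #   sum_i f(x_i) y_i  ~  sum_j v_j f(z_j)
    fz = f(lattice)
    return (w @ fz**2 - 2.0 * (v @ fz) + y_sq_sum) / n_data

# Toy usage: 10^4 points in [0, 1)^2 compressed onto a 377-point Fibonacci lattice
rng = np.random.default_rng(0)
X = rng.random((10_000, 2))
f_true = lambda x: np.sin(2 * np.pi * x[:, 0]) * np.cos(2 * np.pi * x[:, 1])
y = f_true(X) + 0.01 * rng.standard_normal(len(X))

lattice = rank1_lattice(377, generator=[1, 233])
w, v = compress(X, y, lattice)

loss_compressed = compressed_mse(f_true, lattice, w, v, (y**2).sum(), len(X))
loss_exact = np.mean((f_true(X) - y) ** 2)
print(loss_compressed, loss_exact)
```

In this simplified setting the weights are computed once in the preprocessing step; every subsequent loss evaluation during optimization then touches only the 377 lattice points and the two weight vectors rather than all 10,000 data points, which is the kind of speedup the abstract refers to.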
