arXiv:2310.16996 [cs.LG]

Towards Continually Learning Application Performance Models

Ray A. O. Sinurat, Anurag Daram, Haryadi S. Gunawi, Robert B. Ross, Sandeep Madireddy

Published: 2023-10-25 (Version 1)

Machine learning-based performance models are increasingly used to inform critical job scheduling and application optimization decisions. Traditionally, these models assume that the data distribution does not change as more samples are collected over time. However, owing to the complexity and heterogeneity of production HPC systems, these systems are susceptible to hardware degradation, component replacement, and software patches, any of which can cause the data distribution to drift and adversely affect the performance models. To address this, we develop continually learning performance models that account for distribution drift, alleviate catastrophic forgetting, and improve generalizability. Our best model retained its accuracy while learning the new data distribution induced by system changes, and achieved a 2x improvement in prediction accuracy over the whole data sequence compared to the naive approach.
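The abstract names the problem setting (sequentially arriving performance data whose distribution drifts after system changes) but not the method details. As a minimal sketch of one standard baseline for this setting, the snippet below trains a small regression model task-by-task with an experience-replay buffer maintained by reservoir sampling to mitigate catastrophic forgetting; the class name `ReplayContinualRegressor`, the network sizes, and the buffer policy are illustrative assumptions, not the authors' implementation.

```python
import random
import torch
import torch.nn as nn

class ReplayContinualRegressor:
    """Hypothetical sketch: an MLP performance model trained sequentially
    on data 'tasks' (e.g., before/after a system change), with a replay
    buffer of past samples to mitigate catastrophic forgetting."""

    def __init__(self, n_features, buffer_size=512):
        self.model = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )
        self.opt = torch.optim.Adam(self.model.parameters(), lr=1e-3)
        self.buffer = []            # reservoir of (x, y) pairs from past tasks
        self.buffer_size = buffer_size
        self.seen = 0               # total samples offered to the reservoir

    def _reservoir_add(self, x, y):
        # Reservoir sampling keeps a uniform sample over all data seen so far.
        self.seen += 1
        if len(self.buffer) < self.buffer_size:
            self.buffer.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.buffer_size:
                self.buffer[j] = (x, y)

    def fit_task(self, X, y, epochs=50, batch_size=32):
        """Fit one task's data (X: [n, n_features], y: [n] float tensors),
        mixing each minibatch with replayed samples from earlier tasks."""
        loss_fn = nn.MSELoss()
        n = X.shape[0]
        for _ in range(epochs):
            perm = torch.randperm(n)
            for i in range(0, n, batch_size):
                idx = perm[i:i + batch_size]
                xb, yb = X[idx], y[idx]
                if self.buffer:
                    k = min(batch_size, len(self.buffer))
                    rx, ry = zip(*random.sample(self.buffer, k))
                    xb = torch.cat([xb, torch.stack(rx)])
                    yb = torch.cat([yb, torch.stack(ry)])
                self.opt.zero_grad()
                loss = loss_fn(self.model(xb).squeeze(-1), yb)
                loss.backward()
                self.opt.step()
        # Only after training do this task's samples enter the reservoir.
        for xi, yi in zip(X, y):
            self._reservoir_add(xi, yi)
```

Under these assumptions, calling `fit_task` once per distribution regime (e.g., pre- and post-upgrade data) replays a uniform sample of earlier regimes into every minibatch, which is one common way to trade a small memory cost for retention of accuracy on the full data sequence.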

Comments: Presented at Workshop on Machine Learning for Systems at 36th Conference on Neural Information Processing Systems (NeurIPS 2022)
Categories: cs.LG, cs.DC