arXiv:2406.18787 [cs.LG]

Unified Uncertainties: Combining Input, Data and Model Uncertainty into a Single Formulation

Matias Valdenegro-Toro, Ivo Pascal de Jong, Marco Zullich

Published 2024-06-26 (Version 1)

Modelling uncertainty in Machine Learning models is essential for achieving safe and reliable predictions. Most research on uncertainty focuses on output uncertainty (predictions), while minimal attention is paid to uncertainty at the inputs. We propose a method for propagating uncertainty in the inputs through a Neural Network that is simultaneously able to estimate input, data, and model uncertainty. Our results show that this propagation of input uncertainty yields a more stable decision boundary than comparatively simple Monte Carlo sampling, even under large amounts of input noise. Additionally, we discuss and demonstrate that input uncertainty, when propagated through the model, results in model uncertainty at the outputs. The explicit incorporation of input uncertainty may be beneficial in situations where the amount of input uncertainty is known, though good datasets for this are still needed.
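
To illustrate the Monte Carlo baseline the abstract compares against, the sketch below (not the authors' implementation; the toy network and names such as mlp and mc_propagate are assumptions) represents input uncertainty as Gaussian noise around each input, pushes samples through a small network, and measures the spread this induces at the output.

```python
# Minimal sketch of Monte Carlo propagation of input uncertainty.
# This is an illustrative assumption, not the paper's proposed method:
# the toy MLP, its random weights, and the sample count are placeholders.

import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP with fixed random weights; stands in for a trained model.
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

def mlp(x):
    """Forward pass: tanh hidden layer followed by a softmax output."""
    h = np.tanh(x @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mc_propagate(x, input_std, n_samples=1000):
    """Monte Carlo propagation of input uncertainty:
    sample x_i ~ N(x, input_std^2), run each sample through the network,
    and summarise the induced output distribution."""
    noisy = x + rng.normal(scale=input_std, size=(n_samples, x.shape[-1]))
    probs = mlp(noisy)                      # (n_samples, n_classes)
    return probs.mean(axis=0), probs.var(axis=0)

x = np.array([0.3, -1.2])                   # a single 2-D input
mean, var = mc_propagate(x, input_std=0.5)
print("predictive mean:", mean)
print("variance induced by input noise:", var)
```

In this baseline the output variance grows with the input noise scale, which is the effect the paper's explicit propagation of input uncertainty aims to capture with a more stable decision boundary.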

Comments: 4 pages, 3 figures, with appendix. Camera-ready for the LatinX in AI Research Workshop @ ICML 2024
Categories: cs.LG, stat.ML