arXiv Analytics

arXiv:1907.07904 [cs.LG]

On the relation between Loss Functions and T-Norms

Francesco Giannini, Giuseppe Marra, Michelangelo Diligenti, Marco Maggini, Marco Gori

Published 2019-07-18 (Version 1)

Deep learning has been shown to achieve impressive results in several domains like computer vision and natural language processing. A key element of this success has been the development of new loss functions, like the popular cross-entropy loss, which has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. While the cross-entropy loss is usually justified from a probabilistic perspective, this paper shows an alternative and more direct interpretation of this loss in terms of t-norms and their associated generator functions, and derives a general relation between loss functions and t-norms. In particular, the presented work shows intriguing results leading to the development of a novel class of loss functions. These losses can be exploited in any supervised learning task and could lead to faster convergence rates than the commonly employed cross-entropy loss.
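As a minimal sketch of the relation the abstract describes: the product t-norm has additive generator g(x) = -log(x), and applying this generator to each example's predicted probability of the true class (and summing) recovers exactly the cross-entropy loss. The function names below are illustrative, not taken from the paper:

```python
import numpy as np

def generator_product(x):
    """Additive generator of the product t-norm: g(x) = -log(x)."""
    return -np.log(x)

def tnorm_loss(probs_true_class, generator=generator_product):
    """Loss obtained by applying a t-norm generator to each truth
    degree (here, the predicted probability of the true class)
    and summing over the batch."""
    return float(np.sum(generator(np.asarray(probs_true_class))))

def cross_entropy(probs_true_class):
    """Standard cross-entropy over the true-class probabilities."""
    return float(-np.sum(np.log(np.asarray(probs_true_class))))

# With the product t-norm's generator, the two losses coincide.
p = [0.9, 0.8, 0.95]
print(np.isclose(tnorm_loss(p), cross_entropy(p)))  # True
```

Other t-norms come with other generators, which is what yields the novel family of losses the abstract refers to.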
