arXiv Analytics

arXiv:2309.02250 [cs.LG]

RoBoSS: A Robust, Bounded, Sparse, and Smooth Loss Function for Supervised Learning

Mushir Akhtar, M. Tanveer, Mohd. Arshad

Published 2023-09-05 (Version 1)

In machine learning, the loss function plays a central role, particularly in supervised learning, where it fundamentally shapes the behavior and efficacy of the learning algorithm. Traditional loss functions, while widely used, often struggle to handle noisy and high-dimensional data, impede model interpretability, and lead to slow convergence during training. In this paper, we address these limitations by proposing a novel robust, bounded, sparse, and smooth (RoBoSS) loss function for supervised learning. Further, we incorporate the RoBoSS loss function within the framework of the support vector machine (SVM) and introduce a new robust algorithm named $\mathcal{L}_{rbss}$-SVM. For the theoretical analysis, we also establish the classification-calibrated property of the proposed loss and the generalization ability of the resulting model. These investigations provide deeper insight into the performance of the RoBoSS loss function in classification tasks and its potential to generalize well to unseen data. To empirically demonstrate the effectiveness of the proposed $\mathcal{L}_{rbss}$-SVM, we evaluate it on $88$ real-world UCI and KEEL datasets from diverse domains. Additionally, to exemplify its effectiveness within the biomedical realm, we evaluate it on two medical datasets: an electroencephalogram (EEG) signal dataset and the breast cancer (BreaKHis) dataset. The numerical results substantiate the superiority of the proposed $\mathcal{L}_{rbss}$-SVM model, both in its generalization performance and in its training-time efficiency.
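The abstract names the properties of RoBoSS (robust, bounded, sparse, smooth) but does not give its closed form. As a rough illustration of how a loss with those properties plugs into a linear SVM objective, the sketch below uses a *hypothetical* bounded, smooth hinge surrogate, $\phi(u) = 1 - \exp(-a \max(0, 1-u)^2)$ of the margin $u = y f(x)$; the parameter names (`a`, `lam`, `lr`) and function names are placeholders for exposition, not the paper's actual formulation.

```python
# Minimal sketch, assuming a hypothetical bounded/smooth hinge surrogate
# (NOT the RoBoSS loss itself, whose form is not given in the abstract).
import numpy as np

def bounded_smooth_loss(margin, a=1.0):
    """phi(u) = 1 - exp(-a * max(0, 1 - u)^2).

    Zero for margins >= 1 (no penalty on well-classified points) and
    saturating at 1 for large violations (caps the influence of outliers).
    """
    v = np.maximum(0.0, 1.0 - margin)
    return 1.0 - np.exp(-a * v ** 2)

def loss_grad_wrt_margin(margin, a=1.0):
    """Derivative of phi w.r.t. the margin; smooth, and exactly 0 for u >= 1."""
    v = np.maximum(0.0, 1.0 - margin)
    return -2.0 * a * v * np.exp(-a * v ** 2)

def train_linear_svm(X, y, a=1.0, lam=1e-2, lr=0.1, epochs=200):
    """Gradient descent on lam/2 * ||w||^2 + mean_i phi(y_i * (w.x_i + b))."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        g = loss_grad_wrt_margin(margins, a)              # dphi/dmargin per sample
        grad_w = lam * w + (X * (g * y)[:, None]).mean(axis=0)
        grad_b = (g * y).mean()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage on a synthetic two-class problem with labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
w, b = train_linear_svm(X, y)
print("train accuracy:", np.mean(np.sign(X @ w + b) == y))
```

The sketch shows, qualitatively, why the properties in the title matter: boundedness caps each sample's contribution to the gradient (robustness to label noise and outliers), while the flat region beyond margin 1 gives zero gradients for confidently classified points (a sparsity-inducing effect), and smoothness permits plain gradient-based training.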

Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
Categories: cs.LG
Related articles:
arXiv:1812.07385 [cs.LG] (Published 2018-12-15)
Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples
arXiv:2202.04513 [cs.LG] (Published 2022-02-09)
The no-free-lunch theorems of supervised learning
arXiv:2104.05439 [cs.LG] (Published 2021-04-09)
Tensor Network for Supervised Learning at Finite Temperature