arXiv:1901.09054 [cs.LG]

Deep Learning on Small Datasets without Pre-Training using Cosine Loss

Björn Barz, Joachim Denzler

Published 2019-01-25, Version 1

Two things seem to be indisputable in the contemporary deep learning discourse: 1. The categorical cross-entropy loss after softmax activation is the method of choice for classification. 2. Training a CNN classifier from scratch on small datasets does not work well. In contrast to this, we show that the cosine loss function provides significantly better performance than cross-entropy on datasets with only a handful of samples per class. For example, the accuracy achieved on the CUB-200-2011 dataset without pre-training is 30% higher than with the cross-entropy loss. Further experiments on four other popular datasets confirm our findings. Moreover, we show that the classification performance can be improved further by integrating prior knowledge in the form of class hierarchies, which is straightforward with the cosine loss.
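For reference, below is a minimal sketch of the cosine loss as described in the abstract: one minus the cosine similarity between the L2-normalized network output and the one-hot target of the true class. The framework (PyTorch) and the function name `cosine_loss` are illustrative assumptions, not the authors' reference implementation; the hierarchy variant mentioned above would replace the one-hot target with class embeddings derived from the class hierarchy.

```python
import torch
import torch.nn.functional as F

def cosine_loss(logits: torch.Tensor, targets: torch.Tensor,
                num_classes: int) -> torch.Tensor:
    # L2-normalize the raw network outputs onto the unit hypersphere.
    pred = F.normalize(logits, p=2.0, dim=1)
    # One-hot class targets are already unit vectors; the paper's
    # hierarchy variant would substitute hierarchy-based embeddings here.
    target = F.one_hot(targets, num_classes=num_classes).to(pred.dtype)
    # Loss = 1 - <normalized prediction, target>, averaged over the batch.
    return (1.0 - (pred * target).sum(dim=1)).mean()

# Toy usage: a batch of 4 samples over 10 classes.
logits = torch.randn(4, 10)
labels = torch.tensor([0, 3, 7, 1])
print(cosine_loss(logits, labels, num_classes=10))  # scalar in [0, 2]
```

Because both vectors are unit-length, the per-sample loss lies in [0, 2], which bounds the gradient magnitude regardless of the scale of the raw network outputs.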

Related articles:
arXiv:2210.17092 [cs.LG] (Published 2022-10-31)
Confidence-Nets: A Step Towards better Prediction Intervals for regression Neural Networks on small datasets
arXiv:2308.08934 [cs.LG] (Published 2023-08-17)
On Data Imbalance in Molecular Property Prediction with Pre-training
arXiv:1901.09960 [cs.LG] (Published 2019-01-28)
Using Pre-Training Can Improve Model Robustness and Uncertainty