arXiv:2107.02170 [cs.CV]

On Model Calibration for Long-Tailed Object Detection and Instance Segmentation

Tai-Yu Pan, Cheng Zhang, Yandong Li, Hexiang Hu, Dong Xuan, Soravit Changpinyo, Boqing Gong, Wei-Lun Chao

Published 2021-07-05 (Version 1)

Vanilla models for object detection and instance segmentation suffer from a heavy bias toward detecting frequent objects in the long-tailed setting. Existing methods address this issue mostly during training, e.g., by re-sampling or re-weighting. In this paper, we investigate a largely overlooked approach -- post-processing calibration of confidence scores. We propose NorCal, Normalized Calibration for long-tailed object detection and instance segmentation, a simple and straightforward recipe that reweights the predicted scores of each class by its training sample size. We show that separately handling the background class and normalizing the scores over classes for each proposal are key to achieving superior performance. On the LVIS dataset, NorCal effectively improves nearly all the baseline models not only on rare classes but also on common and frequent classes. Finally, we conduct extensive analyses and ablation studies to offer insights into various modeling choices and mechanisms of our approach.
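The abstract describes the recipe only in words; the sketch below is a minimal, hedged illustration of that idea for a single proposal, assuming per-class softmax scores with an explicit background entry and a tunable exponent (called `gamma` here; the function name, variable names, and default value are illustrative, not taken from the paper's code). It down-weights each foreground score by the class's training sample size, keeps the background score separate, and renormalizes over all classes, which is the high-level mechanism the abstract outlines; consult the paper for the exact formulation.

```python
import numpy as np

def norcal_calibrate(scores, class_counts, gamma=1.0):
    """Calibrate one proposal's class scores by training sample size (sketch).

    scores       : array of shape (C + 1,); the last entry is assumed to be
                   the background score, the first C entries are foreground.
    class_counts : array of shape (C,); training sample size of each
                   foreground class.
    gamma        : exponent controlling the calibration strength
                   (hypothetical hyperparameter name).
    """
    scores = np.asarray(scores, dtype=np.float64)
    counts = np.asarray(class_counts, dtype=np.float64)

    fg_scores, bg_score = scores[:-1], scores[-1]

    # Down-weight each foreground score by its training sample size:
    # frequent classes are penalized more strongly than rare ones.
    calibrated_fg = fg_scores / np.power(counts, gamma)

    # Leave the background score untouched, then renormalize over all
    # classes so the calibrated scores of this proposal sum to one.
    denom = calibrated_fg.sum() + bg_score
    return np.concatenate([calibrated_fg, [bg_score]]) / denom


# Toy usage: three foreground classes with very different sample sizes.
scores = [0.50, 0.30, 0.05, 0.15]   # last entry = background
class_counts = [10000, 500, 20]     # frequent, common, rare
print(norcal_calibrate(scores, class_counts, gamma=0.6))
```

In this toy run the rare class gains relative mass at the expense of the frequent one, while the background score still competes in the normalization, matching the abstract's point that handling background separately and normalizing per proposal are both important.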

Related articles:
arXiv:2012.08548 [cs.CV] (Published 2020-12-15)
Equalization Loss v2: A New Gradient Balance Approach for Long-tailed Object Detection
arXiv:2303.06268 [cs.CV] (Published 2023-03-11, updated 2024-01-13)
Trust your neighbours: Penalty-based constraints for model calibration
arXiv:2310.12152 [cs.CV] (Published 2023-10-18)
Learning from Rich Semantics and Coarse Locations for Long-tailed Object Detection