arXiv:2304.01636 [cs.CV]

Label-guided Attention Distillation for Lane Segmentation

Zhikang Liu, Lanyun Zhu

Published 2023-04-04 (Version 1)

Contemporary segmentation methods are usually based on deep fully convolutional networks (FCNs). However, layer-by-layer convolutions with a growing receptive field are not effective at capturing long-range contexts, such as lane markers in a scene. In this paper, we address this issue by designing a distillation method that exploits label structure when training a segmentation network. The intuition is that the ground-truth lane annotations themselves exhibit internal structure. We broadcast these structure hints throughout a teacher network, i.e., we train a teacher network that consumes a lane label map as input and attempts to replicate it as output. The attention maps of the teacher network are then adopted as supervisors of the student segmentation network. The teacher network, with label structure information embedded, knows distinctly where the convolutional layers should pay visual attention. The proposed method is named Label-guided Attention Distillation (LGAD). It turns out that the student network learns significantly better with LGAD than when learning alone. As the teacher network is discarded after training, our method does not increase inference time. Note that LGAD can be easily incorporated into any lane segmentation network.
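The following is a minimal PyTorch sketch of the training setup the abstract describes: a teacher that reconstructs the lane label map, and a student whose attention maps are pulled toward the teacher's. The tiny backbone, the activation-based attention map, and the loss weight `alpha` are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def attention_map(feat):
    """Activation-based attention: average of squared activations over
    channels, flattened and L2-normalized (a common choice in
    attention-transfer methods; assumed here)."""
    att = feat.pow(2).mean(dim=1)              # (B, H, W)
    return F.normalize(att.flatten(1), dim=1)  # (B, H*W)

class TinyNet(nn.Module):
    """Stand-in FCN; the actual teacher/student backbones differ."""
    def __init__(self, in_ch):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv2d(16, 1, 3, padding=1)
    def forward(self, x):
        feat = self.enc(x)
        return self.dec(feat), feat

teacher = TinyNet(in_ch=1)  # consumes the lane label map, replicates it
student = TinyNet(in_ch=3)  # consumes the RGB image, predicts the lanes

image = torch.randn(2, 3, 64, 64)
label = torch.randint(0, 2, (2, 1, 64, 64)).float()

# Stage 1: train the teacher to reconstruct the label map, embedding
# the label structure into its features (one illustrative step).
t_out, _ = teacher(label)
reconstruction_loss = F.binary_cross_entropy_with_logits(t_out, label)

# Stage 2: the frozen teacher's attention supervises the student.
# The teacher is only used during training and is discarded afterwards.
with torch.no_grad():
    _, t_feat = teacher(label)
s_out, s_feat = student(image)
seg_loss = F.binary_cross_entropy_with_logits(s_out, label)
distill_loss = F.mse_loss(attention_map(s_feat), attention_map(t_feat))
alpha = 0.1  # assumed weighting between the two terms
total_loss = seg_loss + alpha * distill_loss
```

Because the distillation term only adds a loss during training, the student's architecture and inference path are unchanged, which is why the method incurs no inference-time cost.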

Comments: Accepted to Neurocomputing 2021
Journal: Elsevier Neurocomputing, vol. 438, May 2021, pp. 312-322
Categories: cs.CV