arXiv Analytics

arXiv:2207.10425 [cs.CV]

KD-MVS: Knowledge Distillation Based Self-supervised Learning for MVS

Yikang Ding, Qingtian Zhu, Xiangyue Liu, Wentao Yuan, Haotian Zhang, Chi Zhang

Published 2022-07-21, Version 1

Supervised multi-view stereo (MVS) methods have achieved remarkable progress in terms of reconstruction quality, but suffer from the challenge of collecting large-scale ground-truth depth. In this paper, we propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed \textit{KD-MVS}, which mainly consists of self-supervised teacher training and distillation-based student training. Specifically, the teacher model is trained in a self-supervised fashion using both photometric and featuremetric consistency. Then we distill the knowledge of the teacher model to the student model through probabilistic knowledge transfer. With the supervision of validated knowledge, the student model is able to outperform its teacher by a large margin. Extensive experiments performed on multiple datasets show that our method can even outperform supervised methods.
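The distillation step described above (transferring the teacher's per-pixel depth probability distribution to the student under a validity mask) can be illustrated with a KL-divergence loss over depth hypotheses. The PyTorch snippet below is a minimal sketch, not the authors' implementation; the function name `distillation_loss`, the tensor shapes, and the masking scheme are assumptions for illustration only.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_probs, valid_mask):
    """Hypothetical sketch of probabilistic knowledge transfer for MVS.

    student_logits: (B, D, H, W) raw scores over D depth hypotheses from the student.
    teacher_probs:  (B, D, H, W) teacher's probability distribution over the same hypotheses.
    valid_mask:     (B, 1, H, W) 1.0 where the teacher's pseudo-label passed validation.
    """
    log_p_student = F.log_softmax(student_logits, dim=1)
    # Per-pixel KL(teacher || student), summed over the depth-hypothesis dimension.
    kl = (teacher_probs * (teacher_probs.clamp_min(1e-8).log() - log_p_student)).sum(dim=1, keepdim=True)
    # Average only over pixels whose teacher knowledge was validated.
    return (kl * valid_mask).sum() / valid_mask.sum().clamp_min(1.0)

# Toy usage with random tensors (shapes chosen arbitrarily).
B, D, H, W = 2, 48, 64, 80
student_logits = torch.randn(B, D, H, W)
teacher_probs = F.softmax(torch.randn(B, D, H, W), dim=1)
valid_mask = (torch.rand(B, 1, H, W) > 0.2).float()
print(distillation_loss(student_logits, teacher_probs, valid_mask))

Masking the loss with a validity map reflects the paper's idea of supervising the student only with validated teacher knowledge, rather than with every pseudo-label.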

Related articles:
arXiv:2303.12130 [cs.CV] (Published 2023-03-21)
MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation
arXiv:2006.03810 [cs.CV] (Published 2020-06-06)
An Empirical Analysis of the Impact of Data Augmentation on Knowledge Distillation
arXiv:1904.01802 [cs.CV] (Published 2019-04-03)
Correlation Congruence for Knowledge Distillation
Baoyun Peng et al.