arXiv:2108.05312 [cs.CV]

Towards Interpretable Deep Networks for Monocular Depth Estimation

Zunzhi You, Yi-Hsuan Tsai, Wei-Chen Chiu, Guanbin Li

Published 2021-08-11 (Version 1)

Deep networks for Monocular Depth Estimation (MDE) have recently achieved promising performance, and it is of great importance to further understand the interpretability of these networks. Existing methods attempt to provide post-hoc explanations by investigating visual cues, which may not explore the internal representations learned by deep networks. In this paper, we find that some hidden units of the network are selective to certain ranges of depth, and such behavior can serve as a way to interpret the internal representations. Based on this observation, we quantify the interpretability of a deep MDE network by the depth selectivity of its hidden units. We then propose a method to train interpretable deep MDE networks without changing their original architectures, by assigning each unit a depth range to select. Experimental results demonstrate that our method enhances the interpretability of deep MDE networks by largely improving the depth selectivity of their units, without harming, and in some cases even improving, depth estimation accuracy. We further provide a comprehensive analysis showing the reliability of selective units, the applicability of our method across different layers, models, and datasets, and a demonstration of model error analysis. Source code and models are available at https://github.com/youzunzhi/InterpretableMDE .
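The abstract does not spell out how depth selectivity is scored, so the sketch below is only an illustration of the general idea, not the paper's actual metric: discretize depth into ranges, measure a unit's mean activation within each range, and score how concentrated the response is on its preferred range (a selectivity-index form common in the interpretability literature). The function name and formula here are assumptions.

```python
def depth_selectivity(mean_activations, eps=1e-8):
    """Illustrative (assumed) selectivity index for one hidden unit.

    mean_activations: the unit's mean activation within each discretized
    depth range (assumed non-negative, e.g. post-ReLU).
    Returns a score in [0, 1]: 0 means a flat response across all depth
    ranges; values near 1 mean the unit fires almost exclusively for one
    preferred depth range.
    """
    a = [float(x) for x in mean_activations]
    a_max = max(a)                                # preferred depth range
    a_rest = (sum(a) - a_max) / (len(a) - 1)      # mean over the others
    return (a_max - a_rest) / (a_max + a_rest + eps)

# A unit responding almost only to the second depth range is highly selective:
print(depth_selectivity([0.05, 0.9, 0.04, 0.03]))  # > 0.9
# A unit with a uniform response is not selective:
print(depth_selectivity([0.5, 0.5, 0.5, 0.5]))     # 0.0
```

Under this reading, "improving the depth selectivity of units" means pushing such per-unit scores higher, and the proposed training method encourages each unit toward one assigned depth range without altering the network architecture.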

Related articles:
arXiv:2010.13118 [cs.CV] (Published 2020-10-25)
Monocular Depth Estimation via Listwise Ranking using the Plackett-Luce model
arXiv:2207.04718 [cs.CV] (Published 2022-07-11)
Physical Attack on Monocular Depth Estimation with Optimal Adversarial Patches
arXiv:2309.14137 [cs.CV] (Published 2023-09-25)
IEBins: Iterative Elastic Bins for Monocular Depth Estimation