arXiv:2206.07435 [cs.CV]

Forecasting of depth and ego-motion with transformers and self-supervision

Houssem Boulahbal, Adrian Voicila, Andrew Comport

Published 2022-06-15, Version 1

This paper addresses the problem of end-to-end self-supervised forecasting of depth and ego-motion. Given a sequence of raw images, the aim is to forecast both the scene geometry and the camera ego-motion using a self-supervised photometric loss. The architecture combines convolutional and transformer modules, leveraging the benefits of both: the inductive bias of CNNs and the multi-head attention of transformers. This yields a rich spatio-temporal representation that enables accurate depth forecasting. Prior work attempts to solve this problem using multi-modal input/output with supervised ground-truth data, which is impractical because it requires a large annotated dataset. In contrast, this paper forecasts depth and ego-motion from raw images alone, trained entirely with self-supervision. The approach performs well on the KITTI benchmark, with several performance criteria even comparable to prior non-forecasting self-supervised monocular depth inference methods.
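The abstract does not come with code, so the following is only a minimal PyTorch sketch of the two ideas it names: a hybrid model in which a convolutional encoder supplies inductive bias and a transformer applies multi-head attention over spatio-temporal tokens, plus the SSIM + L1 photometric loss that is standard in self-supervised depth work (Monodepth2-style). All class and function names, layer sizes, and the alpha weight are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, NOT the authors' code: architecture details and the
# SSIM/L1 mix (alpha) are assumptions borrowed from common self-supervised
# depth pipelines.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvTransformerForecaster(nn.Module):
    """Hypothetical CNN + transformer forecaster: a convolutional encoder
    extracts per-frame features (inductive bias), a transformer attends
    across the token sequence (multi-head attention), and two heads predict
    future depth and 6-DoF ego-motion."""
    def __init__(self, d_model=256, nhead=8):
        super().__init__()
        self.encoder = nn.Sequential(  # per-frame CNN feature extractor
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=4)
        self.depth_head = nn.Conv2d(d_model, 1, 3, padding=1)  # future depth
        self.pose_head = nn.Linear(d_model, 6)  # future ego-motion (rot + trans)

    def forward(self, frames):  # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))       # (B*T, D, h, w)
        D, h, w = feats.shape[1:]
        tokens = feats.flatten(2).permute(0, 2, 1)       # (B*T, h*w, D)
        tokens = tokens.reshape(B, T * h * w, D)         # spatio-temporal tokens
        tokens = self.temporal(tokens)                   # multi-head attention
        last = tokens.reshape(B, T, h, w, D)[:, -1].permute(0, 3, 1, 2)
        depth = F.softplus(self.depth_head(last))        # positive depth map
        pose = self.pose_head(tokens.mean(dim=1))        # 6-DoF ego-motion
        return depth, pose

def photometric_loss(pred, target, alpha=0.85):
    """SSIM + L1 photometric loss, standard in self-supervised depth methods."""
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    mu_x = F.avg_pool2d(pred, 3, 1, 1)
    mu_y = F.avg_pool2d(target, 3, 1, 1)
    sx = F.avg_pool2d(pred * pred, 3, 1, 1) - mu_x ** 2
    sy = F.avg_pool2d(target * target, 3, 1, 1) - mu_y ** 2
    sxy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    ssim = ((2 * mu_x * mu_y + C1) * (2 * sxy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (sx + sy + C2))
    dssim = torch.clamp((1 - ssim) / 2, 0, 1)
    l1 = (pred - target).abs()
    return (alpha * dssim + (1 - alpha) * l1).mean()
```

In this sketch the loss would be applied between a future frame synthesized from the forecast depth and pose (via view warping, omitted here) and the observed future frame, which is what makes the training signal self-supervised.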

Comments: Accepted at ICPR 2022
Categories: cs.CV
Related articles:
arXiv:2112.04632 [cs.CV] (Published 2021-12-09, updated 2022-04-12)
Recurrent Glimpse-based Decoder for Detection with Transformer
arXiv:1905.05092 [cs.CV] (Published 2019-05-13)
Joint demosaicing and denoising by overfitting of bursts of raw images
arXiv:2101.10203 [cs.CV] (Published 2021-01-25)
ISP Distillation