arXiv Analytics

arXiv:1709.06505 [cs.CV]

SalNet360: Saliency Maps for omni-directional images with CNN

Rafael Monroy, Sebastian Lutz, Tejo Chalasani, Aljosa Smolic

Published 2017-09-19 (Version 1)

Predicting Visual Attention data from any kind of media is valuable to content creators and can be used to drive encoding algorithms efficiently. With the current trend in the Virtual Reality (VR) field, adapting known techniques to this new kind of media is gaining momentum. In this paper, we present an architectural extension to any Convolutional Neural Network (CNN) that fine-tunes traditional 2D saliency prediction to Omnidirectional Images (ODIs) in an end-to-end manner. We show that each step in the proposed pipeline makes the generated saliency map more accurate with respect to the ground-truth data.
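A key ingredient when adapting a 2D saliency CNN to ODIs is giving the network access to where each pixel sits on the viewing sphere. As a minimal sketch (the function name and exact angle conventions are assumptions for illustration, not the paper's implementation), per-pixel spherical coordinates for an equirectangular image can be computed and stacked as extra input channels like this:

```python
import numpy as np

def spherical_coords(height, width):
    """Per-pixel (theta, phi) angles for an equirectangular image.

    theta (longitude) spans (-pi, pi); phi (latitude) spans
    (-pi/2, pi/2). Hypothetical helper illustrating the kind of
    per-pixel spherical-coordinate channels that can be fed to a
    CNN alongside the image patches; conventions are assumed.
    """
    # Sample angles at pixel centers (hence the +0.5 offset).
    theta = (np.arange(width) + 0.5) / width * 2.0 * np.pi - np.pi
    phi = np.pi / 2.0 - (np.arange(height) + 0.5) / height * np.pi
    tt, pp = np.meshgrid(theta, phi)       # each of shape (height, width)
    return np.stack([tt, pp], axis=-1)     # (height, width, 2)

# Example: coordinate channels for a small 4x8 equirectangular grid.
coords = spherical_coords(4, 8)
```

The resulting `(H, W, 2)` array can be concatenated channel-wise with the RGB patch before it enters the network, so the convolutional layers can condition their response on latitude, where equirectangular distortion is strongest.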
