arXiv Analytics

arXiv:1603.06987 [cs.CV]

Knowledge Transfer for Scene-specific Motion Prediction

Lamberto Ballan, Francesco Castaldo, Alexandre Alahi, Francesco Palmieri, Silvio Savarese

Published 2016-03-22 (Version 1)

Given a single frame of a video, humans can not only interpret the content of the scene but also forecast the near future. This ability is largely driven by rich prior knowledge about the visual world, both in terms of (i) the dynamics of moving agents and (ii) the semantics of the scene. In this work we exploit the interplay between these two key elements to predict scene-specific motion patterns. First, we extract patch descriptors encoding the probability of moving to the adjacent patches, as well as the probability of remaining in a particular patch or changing behavior. Then, we introduce a Dynamic Bayesian Network that exploits this scene-specific knowledge for trajectory prediction. Experimental results demonstrate that our method accurately predicts trajectories and transfers predictions to a novel scene characterized by similar elements.
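The core idea described above — encoding, for each patch of the scene, a distribution over moves to adjacent patches and rolling that distribution forward — can be illustrated as a first-order Markov walk on a grid. This is a deliberate simplification, not the paper's method: the actual model is a Dynamic Bayesian Network with learned, scene-specific descriptors, and all names and parameters below are hypothetical.

```python
import numpy as np

# Offsets to the 8 neighbouring patches (N, NE, E, ... in row/col terms).
MOVES = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def predict_trajectory(trans_probs, start, n_steps, rng):
    """Roll out a trajectory on a patch grid.

    trans_probs[r, c] is a length-8 distribution over moves to the
    8 neighbouring patches (a hypothetical stand-in for the paper's
    learned patch descriptors).
    """
    rows, cols = trans_probs.shape[:2]
    traj = [start]
    r, c = start
    for _ in range(n_steps):
        k = rng.choice(8, p=trans_probs[r, c])   # sample a move
        dr, dc = MOVES[k]
        r = min(max(r + dr, 0), rows - 1)        # clamp to the grid
        c = min(max(c + dc, 0), cols - 1)
        traj.append((r, c))
    return traj

rng = np.random.default_rng(0)
grid = np.full((10, 10, 8), 1.0 / 8.0)  # uniform dynamics as a placeholder
traj = predict_trajectory(grid, (5, 5), 12, rng)
print(len(traj))  # 13 points: the start plus 12 predicted steps
```

In the paper's setting the per-patch distributions would be estimated from observed trajectories in semantically similar scenes, which is what makes the predictions transferable to a novel scene.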

Related articles:
arXiv:1004.0085 [cs.CV] (Published 2010-04-01)
A stochastic model of human visual attention with a dynamic Bayesian network

arXiv:2307.16601 [cs.CV] (Published 2023-07-31)
Sampling to Distill: Knowledge Transfer from Open-World Data
Yuzheng Wang et al.

arXiv:2309.08690 [cs.CV] (Published 2023-09-15)
BANSAC: A dynamic BAyesian Network for adaptive SAmple Consensus