arXiv Analytics


arXiv:1812.02041 [cs.CV]

Learn to See by Events: RGB Frame Synthesis from Event Cameras

Stefano Pini, Guido Borghi, Roberto Vezzani, Rita Cucchiara

Published 2018-12-05 (Version 1)

Event cameras are biologically-inspired sensors that capture the temporal evolution of the scene, recording only pixel-wise brightness variations. Despite having multiple advantages over traditional cameras, their use is still limited because their output is hard to interpret and cannot be processed directly by traditional vision algorithms. To this end, we present a framework which exploits the output of event cameras to synthesize RGB frames. In particular, the frame generation relies on an initial or periodic set of color key-frames and a sequence of intermediate event frames, i.e. gray-level images that integrate the brightness changes captured by the event camera during a short temporal slot. An adversarial architecture combined with a recurrent module is employed for the frame synthesis. Both traditional and event-based datasets are adopted to assess the capabilities of the proposed architecture: pixel-wise and semantic metrics confirm the quality of the synthesized images.
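The "event frames" mentioned above can be illustrated with a minimal sketch. The exact integration scheme used in the paper is not specified here, so this is an assumption: signed polarities within a temporal slot are summed per pixel, clipped, and mapped to a gray-level image where mid-gray means "no change". The function name and the clipping threshold are hypothetical.

```python
import numpy as np

def accumulate_event_frame(events, height, width, t_start, t_end, clip=5.0):
    """Integrate polarity events into a gray-level "event frame".

    `events` is assumed to be an (N, 4) float array of (x, y, t, p)
    rows, with polarity p in {-1, +1}. Events whose timestamp falls in
    the temporal slot [t_start, t_end) are summed per pixel; the signed
    sum is clipped to [-clip, clip] and linearly mapped to [0, 255],
    so a pixel with no events comes out mid-gray.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    # keep only events inside the temporal slot
    mask = (events[:, 2] >= t_start) & (events[:, 2] < t_end)
    sel = events[mask]
    # accumulate signed polarities at each (y, x) location;
    # np.add.at handles repeated indices correctly
    np.add.at(frame, (sel[:, 1].astype(int), sel[:, 0].astype(int)), sel[:, 3])
    frame = np.clip(frame, -clip, clip)
    return ((frame + clip) / (2 * clip) * 255).astype(np.uint8)
```

In the paper's pipeline, a sequence of such event frames, together with one or more RGB key-frames, would form the input to the recurrent adversarial generator.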

Related articles:
arXiv:2211.09078 [cs.CV] (Published 2022-11-16)
Learning Dense and Continuous Optical Flow from an Event Camera
arXiv:2404.11884 [cs.CV] (Published 2024-04-18)
Seeing Motion at Nighttime with an Event Camera
arXiv:1906.10925 [cs.CV] (Published 2019-06-26)
FA-Harris: A Fast and Asynchronous Corner Detector for Event Cameras