arXiv:2012.11696 [cs.CV]

Image Captioning as an Assistive Technology: Lessons Learned from VizWiz 2020 Challenge

Pierre Dognin, Igor Melnyk, Youssef Mroueh, Inkit Padhi, Mattia Rigotti, Jarret Ross, Yair Schiff, Richard A. Young, Brian Belgodere

Published 2020-12-21 (Version 1)

Image captioning has recently demonstrated impressive progress, largely owing to the introduction of neural network algorithms trained on curated datasets such as MS-COCO. Work in this field is often motivated by the promise of deploying captioning systems in practical applications. However, the scarcity of data and contexts in many competition datasets limits the utility of systems trained on them as assistive technology in real-world settings, such as helping visually impaired people navigate and accomplish everyday tasks. This gap motivated the introduction of the novel VizWiz dataset, which consists of images taken by the visually impaired together with captions that convey useful, task-oriented information. In an effort to help the machine learning and computer vision fields realize their promise of producing technologies with positive social impact, the curators of the VizWiz dataset host several competitions, including one for image captioning. This work details the theory and engineering behind our winning submission to the 2020 captioning competition. Our work provides a step toward improved assistive image captioning systems.

Comments: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
Categories: cs.CV, cs.LG
Related articles:
arXiv:2205.14458 [cs.CV] (Published 2022-05-28)
Variational Transformer: A Framework Beyond the Trade-off between Accuracy and Diversity for Image Captioning
arXiv:1604.00790 [cs.CV] (Published 2016-04-04)
Image Captioning with Deep Bidirectional LSTMs
arXiv:1707.07998 [cs.CV] (Published 2017-07-25)
Bottom-Up and Top-Down Attention for Image Captioning and VQA