arXiv Analytics

arXiv:2006.14264 [cs.CV]

Self-Segregating and Coordinated-Segregating Transformer for Focused Deep Multi-Modular Network for Visual Question Answering

Chiranjib Sur

Published 2020-06-25, Version 1

Attention mechanisms have gained wide popularity due to their effectiveness in achieving high accuracy across many domains. But attention is opportunistic: it is not justified by the content or the usability of that content, and Transformer-like structures create all possible attentions indiscriminately. We define segregating strategies that prioritize content for the application so as to enhance performance. We propose two strategies, the Self-Segregating Transformer (SST) and the Coordinated-Segregating Transformer (CST), and apply them to visual question answering. The self-segregation strategy helps attention better understand and filter the information most useful for answering the question, and creates diversity of visual reasoning in the attention. The approach extends readily to other applications that involve repeated or multiple frames of features, where it would greatly reduce the commonality among attentions. Visual Question Answering (VQA) requires understanding and coordinating both images and their textual interpretations. Experiments demonstrate that segregation strategies for cascaded multi-head transformer attention outperform many previous works, achieving a considerable improvement on the VQA-v2 benchmark.
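The abstract gives no equations, but the core idea — attention output passed through a learned gate that filters (segregates) uninformative attended features instead of keeping every possible attention — can be sketched minimally. The function and parameter names (`segregating_attention`, `gate_w`) are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def segregating_attention(q, k, v, gate_w):
    """Scaled dot-product attention followed by a sigmoid gate that
    down-weights (segregates) attended features judged uninformative.
    This is an illustrative sketch of the segregation idea, not the
    exact SST/CST architecture."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (n_q, n_k) similarity
    attended = softmax(scores) @ v                  # (n_q, d) standard attention
    gate = 1.0 / (1.0 + np.exp(-(attended @ gate_w)))  # per-feature relevance in (0, 1)
    return gate * attended                          # segregated (filtered) features
```

Because the gate lies strictly in (0, 1), each output feature can only shrink relative to plain attention — the mechanism suppresses rather than amplifies, which matches the abstract's framing of filtering common, unhelpful attentions.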

Related articles:
arXiv:1512.02167 [cs.CV] (Published 2015-12-07)
Simple Baseline for Visual Question Answering
arXiv:1708.02711 [cs.CV] (Published 2017-08-09)
Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge
arXiv:1606.00061 [cs.CV] (Published 2016-05-31)
Hierarchical Question-Image Co-Attention for Visual Question Answering