arXiv:1710.06836 [cs.CV]
Using Deep Convolutional Networks for Gesture Recognition in American Sign Language
Published 2017-10-18 (Version 1)
In the realm of multimodal communication, sign language remains one of the most understudied areas. In line with recent advances in deep learning, neural networks have far-reaching implications and applications for sign language interpretation. In this paper, we present a method for using deep convolutional networks to classify images of both the letters and digits in American Sign Language.
Comments: 12 figures
Categories: cs.CV
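The abstract describes the task only at a high level; as a rough illustration of the kind of classifier involved, the sketch below builds a small convolutional network in PyTorch. The input resolution (64x64 grayscale), layer sizes, and class count (36, assuming 26 letters plus 10 digits) are assumptions for illustration, not the architecture reported in the paper.

```python
# Illustrative sketch only: a small CNN image classifier in PyTorch.
# Input size, layer widths, and the 36-class output (26 letters + 10 digits)
# are assumptions for demonstration, not the paper's reported architecture.
import torch
import torch.nn as nn

class ASLConvNet(nn.Module):
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 1x64x64 -> 32x64x64
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 32x32x32
            nn.Conv2d(32, 64, kernel_size=3, padding=1),   # -> 64x32x32
            nn.ReLU(),
            nn.MaxPool2d(2),                               # -> 64x16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                   # logits over letters/digits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example forward pass on a batch of 8 dummy grayscale images.
model = ASLConvNet()
logits = model(torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 36])
```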
Related articles:
arXiv:1602.00020 [cs.CV] (Published 2016-01-29)
Deep convolutional networks for automated detection of posterior-element fractures on spine CT
arXiv:2308.12006 [cs.CV] (Published 2023-08-23)
Multi-stage Factorized Spatio-Temporal Representation for RGB-D Action and Gesture Recognition
arXiv:2203.13291 [cs.CV] (Published 2022-03-24)
Searching for fingerspelled content in American Sign Language