arXiv:2007.04364 [cs.CV]

Temporal aggregation of audio-visual modalities for emotion recognition

Andreea Birhala, Catalin Nicolae Ristea, Anamaria Radoi, Liviu Cristian Dutu

Published 2020-07-08Version 1

Emotion recognition plays a pivotal role in affective computing and human-computer interaction. Current technological developments make it increasingly easy to collect data about a person's emotional state. In general, human perception of the emotion conveyed by a subject is based on the vocal and visual information gathered during the first seconds of interaction. Consequently, integrating verbal (i.e., speech) and non-verbal (i.e., image) information is the preferred choice in most current approaches to emotion recognition. In this paper, we propose a multimodal fusion technique for emotion recognition that combines audio-visual modalities drawn from a temporal window, using a different temporal offset for each modality. We show that the proposed method outperforms other methods from the literature as well as human accuracy ratings. Experiments are conducted on the open-access multimodal dataset CREMA-D.
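The core idea, fusing features from each modality taken at its own temporal offset within a shared window, can be illustrated with a minimal sketch. The function below is hypothetical (the paper's actual architecture, feature dimensions, and pooling are not specified in the abstract): it slices each modality's frame-level feature sequence at a modality-specific offset, pools over the window, and concatenates the results into one fused vector.

```python
import numpy as np

def fuse_modalities(audio_feats, visual_feats,
                    audio_offset=0, visual_offset=0, window=16):
    """Illustrative temporal-offset fusion (not the authors' exact method).

    audio_feats:  (T, Da) array of per-frame audio features
    visual_feats: (T, Dv) array of per-frame visual features
    Each modality is sliced at its own offset, mean-pooled over
    `window` frames, and the pooled vectors are concatenated.
    """
    a = audio_feats[audio_offset:audio_offset + window].mean(axis=0)
    v = visual_feats[visual_offset:visual_offset + window].mean(axis=0)
    return np.concatenate([a, v])

# Toy example: 100 frames, 40-dim audio and 64-dim visual features.
audio = np.random.randn(100, 40)
visual = np.random.randn(100, 64)

# Visual stream delayed by 4 frames relative to the audio stream.
fused = fuse_modalities(audio, visual, audio_offset=0, visual_offset=4)
print(fused.shape)  # (104,)
```

The fused vector would then feed a downstream emotion classifier; varying the per-modality offsets is the knob the abstract describes for temporal aggregation.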
