Compositional action recognition with multi-view feature fusion


Bibliographic Details
Main Authors: Zhao, Zhicheng; Liu, Yingan; Ma, Lei
Format: Online Article Text
Language: English
Published: Public Library of Science, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9009598/
https://www.ncbi.nlm.nih.gov/pubmed/35421122
http://dx.doi.org/10.1371/journal.pone.0266259
Description
Summary: Most current action recognition approaches treat an activity as a single event in a video clip. Recently, representing activities as combinations of verbs and nouns has proven effective for improving action understanding, allowing such compositional representations to be captured. However, there is still a lack of research on representation learning using cross-view or cross-modality information. To exploit the complementary information across multiple views, we propose a feature fusion framework that proceeds in two steps: extraction of appearance features and fusion of multi-view features. We validate our approach on two action recognition datasets, IKEA ASM and LEMMA. We demonstrate that multi-view fusion can effectively generalize across appearances and identify previously unseen actions of interacting objects, surpassing current state-of-the-art methods. In particular, on the IKEA ASM dataset, the multi-view fusion approach improves top-1 accuracy by 18.1% over the single-view approach.
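
The abstract names only the two-step pipeline, per-view appearance feature extraction followed by multi-view fusion, without architectural details. Below is a minimal PyTorch sketch of that idea, assuming a shared ResNet-18 backbone, fusion by feature concatenation, and separate verb and noun classification heads for the compositional verb-noun output; all of these choices are illustrative assumptions, not the authors' implementation.

# Sketch of multi-view feature fusion for compositional action recognition.
# The backbone, fusion operator, and verb/noun heads are assumptions made
# for illustration; the paper's exact architecture is not given in the abstract.
import torch
import torch.nn as nn
import torchvision.models as models

class MultiViewFusion(nn.Module):
    def __init__(self, num_verbs, num_nouns, num_views=3):
        super().__init__()
        # Step 1: shared appearance-feature extractor applied to each view
        # (assumed: ResNet-18 with its classification layer removed).
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features  # 512 for ResNet-18
        backbone.fc = nn.Identity()
        self.backbone = backbone
        # Step 2: fuse per-view features (assumed: concatenation followed
        # by a linear projection back to the feature dimension).
        self.fuse = nn.Linear(num_views * feat_dim, feat_dim)
        # Compositional output: actions as verb-noun pairs.
        self.verb_head = nn.Linear(feat_dim, num_verbs)
        self.noun_head = nn.Linear(feat_dim, num_nouns)

    def forward(self, views):
        # views: list of (batch, 3, H, W) frame tensors, one per camera view.
        feats = [self.backbone(v) for v in views]               # per-view features
        fused = torch.relu(self.fuse(torch.cat(feats, dim=1)))  # multi-view fusion
        return self.verb_head(fused), self.noun_head(fused)

# Usage with hypothetical label-space sizes and three synthetic views:
model = MultiViewFusion(num_verbs=12, num_nouns=30)
views = [torch.randn(2, 3, 224, 224) for _ in range(3)]
verb_logits, noun_logits = model(views)
print(verb_logits.shape, noun_logits.shape)  # torch.Size([2, 12]) torch.Size([2, 30])

Concatenation is the simplest fusion operator; attention-weighted pooling over views is a common alternative when the number of cameras varies between recordings.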