
Two-Stream Retentive Long Short-Term Memory Network for Dense Action Anticipation


Bibliographic Details
Main Authors: Zhao, Fengda, Zhao, Jiuhan, Li, Xianshan, Zhang, Yinghui, Guo, Dingding, Chen, Wenbai
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9126708/
https://www.ncbi.nlm.nih.gov/pubmed/35615551
http://dx.doi.org/10.1155/2022/4260247
Description
Summary: Analyzing and understanding human actions in long-range videos has promising applications, such as video surveillance, autonomous driving, and efficient human-computer interaction. Most research focuses on short-range videos, predicting a single action in an ongoing video or forecasting an action several seconds before it occurs. In this work, a novel method is proposed to forecast a series of actions and their durations after observing a partial video. The method extracts features from both frame sequences and label sequences, and a retentive memory module is introduced to extract rich features at salient time steps and pivotal channels. Extensive experiments are conducted on the Breakfast and 50 Salads data sets. Compared with state-of-the-art methods, the proposed method achieves comparable performance in most cases.
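The abstract describes the architecture only at a high level. The following is a minimal, hypothetical sketch in PyTorch (not the authors' implementation) of a two-stream design in that spirit: one LSTM stream over frame features, one over observed label sequences, and a simple attention module that re-weights salient time steps and pivotal channels before future action classes and segment durations are predicted. All layer sizes, the class count, and the fixed prediction horizon are illustrative assumptions.

# Minimal sketch of a two-stream LSTM for dense action anticipation.
# Not the published model; sizes and the output scheme are assumptions.
import torch
import torch.nn as nn


class RetentiveAttention(nn.Module):
    """Re-weights encoder outputs along the time and channel dimensions."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.time_score = nn.Linear(hidden_dim, 1)              # one score per time step
        self.channel_gate = nn.Linear(hidden_dim, hidden_dim)   # one gate per channel

    def forward(self, h):                                       # h: (batch, time, hidden)
        t_weights = torch.softmax(self.time_score(h), dim=1)    # salient time steps
        c_gates = torch.sigmoid(self.channel_gate(h.mean(dim=1)))  # pivotal channels
        context = (h * t_weights).sum(dim=1)                    # (batch, hidden)
        return context * c_gates


class TwoStreamAnticipator(nn.Module):
    def __init__(self, frame_dim, num_classes, hidden_dim=256, horizon=20):
        super().__init__()
        self.frame_lstm = nn.LSTM(frame_dim, hidden_dim, batch_first=True)
        self.label_lstm = nn.LSTM(num_classes, hidden_dim, batch_first=True)
        self.frame_attn = RetentiveAttention(hidden_dim)
        self.label_attn = RetentiveAttention(hidden_dim)
        # Predict a fixed number of future segments: a class and a duration for each.
        self.action_head = nn.Linear(2 * hidden_dim, horizon * num_classes)
        self.duration_head = nn.Linear(2 * hidden_dim, horizon)
        self.horizon, self.num_classes = horizon, num_classes

    def forward(self, frames, labels_onehot):
        hf, _ = self.frame_lstm(frames)                         # frame stream
        hl, _ = self.label_lstm(labels_onehot)                  # label stream
        fused = torch.cat([self.frame_attn(hf), self.label_attn(hl)], dim=-1)
        actions = self.action_head(fused).view(-1, self.horizon, self.num_classes)
        durations = torch.relu(self.duration_head(fused))       # non-negative lengths
        return actions, durations


# Usage: 8 observed clips, 100 frames each, 1024-d frame features, 48 action classes.
model = TwoStreamAnticipator(frame_dim=1024, num_classes=48)
frames = torch.randn(8, 100, 1024)
labels = torch.zeros(8, 100, 48).scatter_(2, torch.randint(0, 48, (8, 100, 1)), 1.0)
future_actions, future_durations = model(frames, labels)
print(future_actions.shape, future_durations.shape)             # (8, 20, 48) (8, 20)

In this sketch the anticipated future is a fixed-length list of segments; the paper's actual output parameterization and training losses may differ.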