Action Recognition Using Close-Up of Maximum Activation and ETRI-Activity3D LivingLab Dataset
Action recognition models have shown strong performance on various video datasets. Nevertheless, because existing datasets lack rich data on target actions, they are insufficient for the action recognition applications required by industry. To satisfy this requirement, d...
Main Authors: Kim, Doyoung; Lee, Inwoong; Kim, Dohyung; Lee, Sanghoon
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8539691/ https://www.ncbi.nlm.nih.gov/pubmed/34695988 http://dx.doi.org/10.3390/s21206774
Similar Items
- An Efficient Human Instance-Guided Framework for Video Action Recognition
  by: Lee, Inwoong, et al.
  Published: (2021)
- Complexity of locomotion activities in an outside-of-the-lab wearable motion capture dataset
  by: Sharma, Abhishek, et al.
  Published: (2022)
- DVS Benchmark Datasets for Object Tracking, Action Recognition, and Object Recognition
  by: Hu, Yuhuang, et al.
  Published: (2016)
- TUHAD: Taekwondo Unit Technique Human Action Dataset with Key Frame-Based CNN Action Recognition
  by: Lee, Jinkue, et al.
  Published: (2020)
- Changes in maximum lip-closing force after extraction and nonextraction orthodontic treatments
  by: Choi, Tae-Hyun, et al.
  Published: (2020)