
Body and Hand–Object ROI-Based Behavior Recognition Using Deep Learning

Behavior recognition has applications in automatic crime monitoring, automatic sports video analysis, and context awareness of so-called silver robots. In this study, we employ deep learning to recognize behavior based on body and hand–object interaction regions of interest (ROIs). We propose an ROI-based four-stream ensemble convolutional neural network (CNN). Behavior recognition data are mainly composed of images and skeletons. The first stream uses a pre-trained 2D-CNN by converting the 3D skeleton sequence into pose evolution images (PEIs). The second stream inputs the RGB video into the 3D-CNN to extract temporal and spatial features. The most important information in behavior recognition is identification of the person performing the action. Therefore, if the neural network is trained by removing ambient noise and placing the ROI on the person, feature analysis can be performed by focusing on the behavior itself rather than learning the entire region. Accordingly, the third stream inputs the RGB video limited to the body-ROI into the 3D-CNN, and the fourth stream inputs the RGB video limited to ROIs of hand–object interactions into the 3D-CNN. Finally, because better performance is expected by combining the information of the models trained with attention to these ROIs, better recognition is achieved through late fusion of the four stream scores. The Electronics and Telecommunications Research Institute (ETRI)-Activity3D dataset was used for the experiments. This dataset contains color images, images of skeletons, and depth images of 55 daily behaviors of 50 elderly and 50 young individuals. The experimental results showed that the proposed model improved recognition by at least 4.27% and up to 20.97% compared to other behavior recognition methods.
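The late fusion of the four stream scores mentioned in the abstract can be sketched as follows. This is a minimal illustration only: it assumes the fused decision is a weighted average of per-stream softmax scores with uniform weights, which is a common late-fusion rule, not necessarily the authors' exact scheme; the function names are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def late_fusion(stream_scores, weights=None):
    """Fuse class scores from independently trained streams.

    stream_scores: list of (num_classes,) raw score vectors, one per
        stream (e.g. PEI 2D-CNN, full-frame 3D-CNN, body-ROI 3D-CNN,
        hand-object-ROI 3D-CNN).
    weights: optional per-stream weights; uniform if omitted.
    Returns (predicted_class, fused_probabilities).
    """
    probs = np.stack([softmax(s) for s in stream_scores])
    if weights is None:
        weights = np.full(len(stream_scores), 1.0 / len(stream_scores))
    fused = np.average(probs, axis=0, weights=weights)
    return int(np.argmax(fused)), fused
```

Because each stream is trained separately and only the output scores are combined, a stream can be added or re-weighted without retraining the others.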


Bibliographic Details

Main Authors: Byeon, Yeong-Hyeon; Kim, Dohyung; Lee, Jaeyeon; Kwak, Keun-Chang
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7961580/
https://www.ncbi.nlm.nih.gov/pubmed/33800776
http://dx.doi.org/10.3390/s21051838
Citation: Byeon, Yeong-Hyeon; Kim, Dohyung; Lee, Jaeyeon; Kwak, Keun-Chang. Body and Hand–Object ROI-Based Behavior Recognition Using Deep Learning. Sensors (Basel). MDPI, published 2021-03-06. http://dx.doi.org/10.3390/s21051838

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).