Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System
Human activity recognition (HAR) is becoming increasingly important, especially with the growing number of elderly people living at home. However, most sensors, such as cameras, do not perform well in low-light environments. To address this issue, we designed a HAR system that combines a camera and a millimeter wave radar, taking advantage of each sensor and a fusion algorithm to distinguish between confusing human activities and to improve accuracy in low-light settings.
Main Authors: | Zhou, Haiyang; Zhao, Yixin; Liu, Yanzhong; Lu, Sichao; An, Xiang; Liu, Qiang |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10221064/ https://www.ncbi.nlm.nih.gov/pubmed/37430664 http://dx.doi.org/10.3390/s23104750 |
_version_ | 1785049367042850816 |
---|---|
author | Zhou, Haiyang Zhao, Yixin Liu, Yanzhong Lu, Sichao An, Xiang Liu, Qiang |
author_facet | Zhou, Haiyang Zhao, Yixin Liu, Yanzhong Lu, Sichao An, Xiang Liu, Qiang |
author_sort | Zhou, Haiyang |
collection | PubMed |
description | Human activity recognition (HAR) is becoming increasingly important, especially with the growing number of elderly people living at home. However, most sensors, such as cameras, do not perform well in low-light environments. To address this issue, we designed a HAR system that combines a camera and a millimeter wave radar, taking advantage of each sensor and a fusion algorithm to distinguish between confusing human activities and to improve accuracy in low-light settings. To extract the spatial and temporal features contained in the multisensor fusion data, we designed an improved CNN-LSTM model. In addition, three data fusion algorithms were studied and investigated. Compared to camera data in low-light environments, the fusion data significantly improved the HAR accuracy by at least 26.68%, 19.87%, and 21.92% under the data level fusion algorithm, feature level fusion algorithm, and decision level fusion algorithm, respectively. Moreover, the data level fusion algorithm also resulted in a reduction of the best misclassification rate to 2%~6%. These findings suggest that the proposed system has the potential to enhance the accuracy of HAR in low-light environments and to decrease human activity misclassification rates. |
format | Online Article Text |
id | pubmed-10221064 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10221064 2023-05-28 Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System Zhou, Haiyang Zhao, Yixin Liu, Yanzhong Lu, Sichao An, Xiang Liu, Qiang Sensors (Basel) Article Human activity recognition (HAR) is becoming increasingly important, especially with the growing number of elderly people living at home. However, most sensors, such as cameras, do not perform well in low-light environments. To address this issue, we designed a HAR system that combines a camera and a millimeter wave radar, taking advantage of each sensor and a fusion algorithm to distinguish between confusing human activities and to improve accuracy in low-light settings. To extract the spatial and temporal features contained in the multisensor fusion data, we designed an improved CNN-LSTM model. In addition, three data fusion algorithms were studied and investigated. Compared to camera data in low-light environments, the fusion data significantly improved the HAR accuracy by at least 26.68%, 19.87%, and 21.92% under the data level fusion algorithm, feature level fusion algorithm, and decision level fusion algorithm, respectively. Moreover, the data level fusion algorithm also resulted in a reduction of the best misclassification rate to 2%~6%. These findings suggest that the proposed system has the potential to enhance the accuracy of HAR in low-light environments and to decrease human activity misclassification rates. MDPI 2023-05-14 /pmc/articles/PMC10221064/ /pubmed/37430664 http://dx.doi.org/10.3390/s23104750 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zhou, Haiyang Zhao, Yixin Liu, Yanzhong Lu, Sichao An, Xiang Liu, Qiang Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System |
title | Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System |
title_full | Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System |
title_fullStr | Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System |
title_full_unstemmed | Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System |
title_short | Multi-Sensor Data Fusion and CNN-LSTM Model for Human Activity Recognition System |
title_sort | multi-sensor data fusion and cnn-lstm model for human activity recognition system |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10221064/ https://www.ncbi.nlm.nih.gov/pubmed/37430664 http://dx.doi.org/10.3390/s23104750 |
work_keys_str_mv | AT zhouhaiyang multisensordatafusionandcnnlstmmodelforhumanactivityrecognitionsystem AT zhaoyixin multisensordatafusionandcnnlstmmodelforhumanactivityrecognitionsystem AT liuyanzhong multisensordatafusionandcnnlstmmodelforhumanactivityrecognitionsystem AT lusichao multisensordatafusionandcnnlstmmodelforhumanactivityrecognitionsystem AT anxiang multisensordatafusionandcnnlstmmodelforhumanactivityrecognitionsystem AT liuqiang multisensordatafusionandcnnlstmmodelforhumanactivityrecognitionsystem |
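The abstract in this record describes an improved CNN-LSTM model that extracts spatial and temporal features from fused camera and millimeter wave radar data, with fusion performed at the data, feature, or decision level. The record itself contains no implementation details, so the following is only a minimal, hypothetical PyTorch sketch of a generic CNN-LSTM classifier with simple channel-wise (data-level) fusion; all layer sizes, tensor shapes, and the concatenation scheme are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical CNN-LSTM sketch for activity classification on fused camera + radar
# sequences. Shapes, layer sizes, and the channel-wise concatenation used here for
# "data level" fusion are illustrative assumptions, not the paper's design.
import torch
import torch.nn as nn


class CNNLSTM(nn.Module):
    def __init__(self, in_channels: int, num_classes: int, hidden_size: int = 128):
        super().__init__()
        # CNN extracts a spatial feature vector from each frame.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch * time, 64, 1, 1)
        )
        # LSTM models temporal dependencies across the frame sequence.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        feats = self.cnn(x.reshape(b * t, c, h, w)).reshape(b, t, 64)
        out, _ = self.lstm(feats)           # (batch, time, hidden)
        return self.classifier(out[:, -1])  # classify from the last time step


# Data-level fusion (illustrative): stack camera frames and radar maps that have
# been resampled to a common spatial grid as extra input channels.
camera = torch.randn(4, 16, 1, 64, 64)     # (batch, time, 1 channel, H, W)
radar = torch.randn(4, 16, 1, 64, 64)      # e.g. a radar map per frame
fused = torch.cat([camera, radar], dim=2)  # -> 2 input channels
logits = CNNLSTM(in_channels=2, num_classes=6)(fused)
print(logits.shape)  # torch.Size([4, 6])
```

For the other two fusion levels mentioned in the abstract, a feature-level variant would typically run each sensor through its own CNN and concatenate the resulting feature vectors before the LSTM, while a decision-level variant would combine the class scores of separately trained per-sensor classifiers (e.g. by weighted averaging); the exact schemes used by the authors are described in the full article.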