An Intelligent Non-Invasive Real-Time Human Activity Recognition System for Next-Generation Healthcare

Bibliographic Details
Main Authors: Taylor, William, Shah, Syed Aziz, Dashtipour, Kia, Zahid, Adnan, Abbasi, Qammer H., Imran, Muhammad Ali
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7248832/
https://www.ncbi.nlm.nih.gov/pubmed/32384716
http://dx.doi.org/10.3390/s20092653
Description
Summary: Human motion detection is receiving considerable attention in the field of Artificial Intelligence (AI) driven healthcare systems. Human motion can be used to provide remote healthcare solutions for vulnerable people by identifying particular movements such as falls, gait and breathing disorders. This can allow people to live more independent lifestyles while still having the safety of being monitored if more direct care is needed. At present, wearable devices can provide real-time monitoring by deploying equipment on a person’s body. However, wearing devices all the time is uncomfortable, elderly users tend to forget to wear them, and being tracked continuously raises privacy concerns. This paper demonstrates how human motions can be detected in a quasi-real-time scenario using a non-invasive method. Patterns in the wireless signals represent particular human body motions, as each movement induces a unique change in the wireless medium. These changes can be used to identify particular body motions. This work produces a dataset that contains patterns of radio wave signals obtained using software-defined radios (SDRs) to establish whether a subject is standing up or sitting down as a test case. The dataset was used to create a machine learning model, which was used in a developed application to provide a quasi-real-time classification of the standing or sitting state. The machine learning model achieved 96.70% accuracy using the Random Forest algorithm with 10-fold cross-validation. A benchmark dataset from wearable devices was compared with the proposed dataset, and the results showed that the proposed dataset achieves a similar accuracy of nearly 90%. The machine learning models developed in this paper are tested on two activities, but the developed system is designed to be applicable to detecting and differentiating an arbitrary number of activities.
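To make the evaluation protocol described in the summary concrete, the sketch below shows a Random Forest classifier scored with 10-fold cross-validation. It is a minimal illustration, not the authors' actual pipeline: the file name csi_sitting_standing.csv, its column layout, and the "activity" label column are assumptions made purely for this example.

# Minimal sketch (not the authors' code) of a Random Forest classifier
# evaluated with 10-fold cross-validation, as reported in the abstract.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical dataset: each row holds wireless-signal features captured via
# SDR, plus an "activity" label (sitting or standing).
data = pd.read_csv("csi_sitting_standing.csv")
X = data.drop(columns=["activity"])
y = data["activity"]

# Random Forest with 10-fold cross-validation; the paper reports 96.70%
# accuracy under this protocol on its own dataset.
model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=10)
print(f"Mean 10-fold CV accuracy: {scores.mean():.4f}")

For classifiers, cross_val_score reports per-fold accuracy by default, which corresponds to the 10-fold accuracy figure cited in the abstract.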