
Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion

Human action recognition (HAR) has gained significant attention recently, as it can be adopted for smart surveillance systems in multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various solutions based on computer vision (CV) have been proposed in the literature, but they have not proven successful because of the large video sequences that must be processed in surveillance systems. The problem is exacerbated in the presence of multi-view cameras. Recently, the development of deep learning (DL)-based systems has shown significant success for HAR, even for multi-view camera systems. In this research work, a DL-based design is proposed for HAR. The proposed design consists of multiple steps, including feature mapping, feature fusion, and feature selection. For the initial feature mapping step, two pre-trained models are considered: DenseNet201 and InceptionV3. The extracted deep features are then fused using the Serial based Extended (SbE) approach, and the best features are selected using Kurtosis-controlled Weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used four datasets: KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time compared with the state of the art.
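The abstract outlines a four-stage pipeline: deep feature mapping with pre-trained DenseNet201 and InceptionV3, serial fusion of the two feature sets, kurtosis-controlled selection of the best features, and classification with a weighted KNN among other supervised learners. As a rough illustration of that pipeline shape only, the minimal Python sketch below concatenates two pre-extracted feature matrices, keeps columns whose kurtosis exceeds the mean kurtosis, and fits a distance-weighted KNN. The random feature matrices, the 1920- and 2048-dimensional placeholders standing in for the two networks' pooled outputs, and the mean-kurtosis threshold are assumptions made for illustration; they are not the authors' actual SbE fusion or selection rules.

```python
# Illustrative sketch only: the abstract does not give the exact SbE fusion or
# kurtosis-control rules, so plain concatenation and a simple kurtosis threshold
# are used as stand-ins, and the deep features are simulated with random numbers.
import numpy as np
from scipy.stats import kurtosis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated deep features for 200 clips (placeholders for DenseNet201 /
# InceptionV3 pooled activations, assumed 1920-D and 2048-D respectively).
n_samples = 200
feat_densenet = rng.normal(size=(n_samples, 1920))
feat_inception = rng.normal(size=(n_samples, 2048))
labels = rng.integers(0, 6, size=n_samples)  # e.g., 6 action classes as in KTH

# Serial fusion: concatenate the two feature vectors sample by sample.
fused = np.concatenate([feat_densenet, feat_inception], axis=1)

# Kurtosis-based selection (assumed rule): keep columns whose kurtosis
# exceeds the mean kurtosis across all columns.
k = kurtosis(fused, axis=0, fisher=True)
selected = fused[:, k > k.mean()]

# Distance-weighted KNN classification on the selected features.
X_tr, X_te, y_tr, y_te = train_test_split(
    selected, labels, test_size=0.3, random_state=0
)
clf = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_tr, y_tr)
print(f"kept {selected.shape[1]} of {fused.shape[1]} features, "
      f"accuracy = {clf.score(X_te, y_te):.3f}")
```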


Bibliographic Details
Main Authors: Khan, Seemab; Khan, Muhammad Attique; Alhaisoni, Majed; Tariq, Usman; Yong, Hwan-Seung; Armghan, Ammar; Alenezi, Fayadh
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8659437/
https://www.ncbi.nlm.nih.gov/pubmed/34883944
http://dx.doi.org/10.3390/s21237941
_version_ 1784612961969504256
author Khan, Seemab
Khan, Muhammad Attique
Alhaisoni, Majed
Tariq, Usman
Yong, Hwan-Seung
Armghan, Ammar
Alenezi, Fayadh
author_sort Khan, Seemab
collection PubMed
description Human action recognition (HAR) has gained significant attention recently, as it can be adopted for smart surveillance systems in multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various solutions based on computer vision (CV) have been proposed in the literature, but they have not proven successful because of the large video sequences that must be processed in surveillance systems. The problem is exacerbated in the presence of multi-view cameras. Recently, the development of deep learning (DL)-based systems has shown significant success for HAR, even for multi-view camera systems. In this research work, a DL-based design is proposed for HAR. The proposed design consists of multiple steps, including feature mapping, feature fusion, and feature selection. For the initial feature mapping step, two pre-trained models are considered: DenseNet201 and InceptionV3. The extracted deep features are then fused using the Serial based Extended (SbE) approach, and the best features are selected using Kurtosis-controlled Weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used four datasets: KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time compared with the state of the art.
format Online
Article
Text
id pubmed-8659437
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8659437 2021-12-10 Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion. Khan, Seemab; Khan, Muhammad Attique; Alhaisoni, Majed; Tariq, Usman; Yong, Hwan-Seung; Armghan, Ammar; Alenezi, Fayadh. Sensors (Basel), Article. MDPI 2021-11-28 /pmc/articles/PMC8659437/ /pubmed/34883944 http://dx.doi.org/10.3390/s21237941 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Human Action Recognition: A Paradigm of Best Deep Learning Features Selection and Serial Based Extended Fusion
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8659437/
https://www.ncbi.nlm.nih.gov/pubmed/34883944
http://dx.doi.org/10.3390/s21237941
work_keys_str_mv AT khanseemab humanactionrecognitionaparadigmofbestdeeplearningfeaturesselectionandserialbasedextendedfusion
AT khanmuhammadattique humanactionrecognitionaparadigmofbestdeeplearningfeaturesselectionandserialbasedextendedfusion
AT alhaisonimajed humanactionrecognitionaparadigmofbestdeeplearningfeaturesselectionandserialbasedextendedfusion
AT tariqusman humanactionrecognitionaparadigmofbestdeeplearningfeaturesselectionandserialbasedextendedfusion
AT yonghwanseung humanactionrecognitionaparadigmofbestdeeplearningfeaturesselectionandserialbasedextendedfusion
AT armghanammar humanactionrecognitionaparadigmofbestdeeplearningfeaturesselectionandserialbasedextendedfusion
AT alenezifayadh humanactionrecognitionaparadigmofbestdeeplearningfeaturesselectionandserialbasedextendedfusion