
Four-layer ConvNet to facial emotion recognition with minimal epochs and the significance of data diversity


Bibliographic Details

Main Authors: Debnath, Tanoy, Reza, Md. Mahfuz, Rahman, Anichur, Beheshti, Amin, Band, Shahab S., Alinejad-Rokny, Hamid
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9050748/
https://www.ncbi.nlm.nih.gov/pubmed/35484318
http://dx.doi.org/10.1038/s41598-022-11173-0
author Debnath, Tanoy
Reza, Md. Mahfuz
Rahman, Anichur
Beheshti, Amin
Band, Shahab S.
Alinejad-Rokny, Hamid
author_sort Debnath, Tanoy
collection PubMed
description Emotion recognition is defined as identifying human emotion and is directly related to fields such as human–computer interfaces, human emotional processing, irrational analysis, medical diagnostics, data-driven animation, human–robot communication, and many more. This paper proposes a new facial emotion recognition model using a convolutional neural network. Our proposed model, “ConvNet”, detects seven specific emotions from image data: anger, disgust, fear, happiness, neutrality, sadness, and surprise. Features extracted from facial expression images by the Local Binary Pattern (LBP), region-based Oriented FAST and Rotated BRIEF (ORB), and a convolutional neural network (CNN) were fused to develop the classification model through training by our proposed CNN model (ConvNet). Our method converges quickly and achieves good performance, from which a real-time schema that can easily fit the model and sense emotions can be developed. Furthermore, this study focuses on a person's mental or emotional state using behavioral aspects. To train the CNN model, we first use the FER2013 database, and then, in the testing stage, apply generalization techniques to the JAFFE and CK+ datasets to evaluate the performance of the model. In the generalization approach, we obtain 92.05% accuracy on the JAFFE dataset and 98.13% accuracy on the CK+ dataset, which is the best performance among existing methods. We also test the system's success by identifying facial expressions in real time. ConvNet consists of four convolutional layers together with two fully connected layers. The experimental results show that ConvNet is able to achieve 96% training accuracy, which is much better than current existing models, and the suggested technique was more accurate than other validation methods. ConvNet also achieved a validation accuracy of 91.01% on the FER2013 dataset. We made all materials publicly accessible for the research community at: https://github.com/Tanoy004/Emotion-recognition-through-CNN.
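The abstract describes an architecture of four convolutional layers followed by two fully connected layers that classifies seven emotions. A minimal sketch of such a network is shown below; the layer widths, kernel sizes, and the 48×48 grayscale input (the FER2013 image size) are illustrative assumptions, not the authors' published configuration:

```python
import torch
import torch.nn as nn

class ConvNetSketch(nn.Module):
    """Hypothetical four-conv-layer + two-FC classifier for the seven
    FER2013 emotion classes (anger, disgust, fear, happiness,
    neutrality, sadness, surprise). Widths are illustrative only."""

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            # Four convolutional blocks; each max-pool halves the spatial size.
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),    # 48 -> 24
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 24 -> 12
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
            nn.Conv2d(128, 256, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 6 -> 3
        )
        self.classifier = nn.Sequential(
            # Two fully connected layers map the 256x3x3 feature map to class logits.
            nn.Flatten(),
            nn.Linear(256 * 3 * 3, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ConvNetSketch()
logits = model(torch.randn(1, 1, 48, 48))  # one 48x48 grayscale face
print(logits.shape)  # torch.Size([1, 7]) -- one logit per emotion class
```

The paper additionally fuses LBP and ORB features with the CNN features before classification; that fusion step is not shown here.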
format Online
Article
Text
id pubmed-9050748
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-90507482022-04-30 Sci Rep Article Nature Publishing Group UK 2022-04-28 /pmc/articles/PMC9050748/ /pubmed/35484318 http://dx.doi.org/10.1038/s41598-022-11173-0 Text en © Crown 2022. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title Four-layer ConvNet to facial emotion recognition with minimal epochs and the significance of data diversity
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9050748/
https://www.ncbi.nlm.nih.gov/pubmed/35484318
http://dx.doi.org/10.1038/s41598-022-11173-0