
Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People

Bibliographic Details
Main Authors: Mukhiddinov, Mukhriddin, Djuraev, Oybek, Akhmedov, Farkhod, Mukhamadiyev, Abdinabi, Cho, Jinsoo
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921901/
https://www.ncbi.nlm.nih.gov/pubmed/36772117
http://dx.doi.org/10.3390/s23031080
author Mukhiddinov, Mukhriddin
Djuraev, Oybek
Akhmedov, Farkhod
Mukhamadiyev, Abdinabi
Cho, Jinsoo
collection PubMed
description Current artificial intelligence systems for determining a person’s emotions rely heavily on lip and mouth movements and on other facial features such as the eyebrows, eyes, and forehead. Furthermore, low-light images are often misclassified because of the dark regions around the eyes and eyebrows. In this work, we propose a facial emotion recognition method for masked facial images that uses low-light image enhancement and analysis of the upper facial features with a convolutional neural network. The proposed approach employs the AffectNet image dataset, which includes eight types of facial expressions and 420,299 images. First, the lower part of the input facial image is covered with a synthetic mask. Boundary and regional representation methods are used to indicate the head and the upper facial features. Second, we adopt a feature extraction strategy based on facial landmark detection, applied to the visible upper part of the masked face. Finally, the extracted features, the coordinates of the detected landmarks, and histograms of oriented gradients are fed into a convolutional neural network for classification. An experimental evaluation shows that the proposed method surpasses others, achieving an accuracy of 69.3% on the AffectNet dataset.
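The pipeline described above (synthetically masking the lower face, extracting facial landmarks and histogram-of-oriented-gradients features from the visible upper face, and classifying the combined features with a convolutional neural network) can be illustrated with the rough Python sketch below. This is a minimal sketch under stated assumptions, not the authors' implementation: it assumes dlib's 68-point landmark model, scikit-image's HOG, a 48x48 upper-face crop, and a small Keras dense classifier standing in for the paper's CNN; the landmark subset, crop size, and all hyperparameters are illustrative.

import numpy as np
import dlib                                  # pip install dlib
from skimage.feature import hog
from skimage.transform import resize
from tensorflow import keras

# The eight AffectNet expression classes mentioned in the abstract.
EXPRESSIONS = ["neutral", "happy", "sad", "surprise",
               "fear", "disgust", "anger", "contempt"]

detector = dlib.get_frontal_face_detector()
# Assumes the standard 68-point model file is available locally.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# Upper-face landmarks that stay visible above a mask: eyebrows and eyes.
UPPER_FACE_POINTS = list(range(17, 27)) + list(range(36, 48))


def upper_face_features(gray_image):
    """Landmark coordinates + HOG features for the unmasked upper face.

    `gray_image` is an 8-bit grayscale NumPy array. Returns None if no
    face is detected.
    """
    faces = detector(gray_image, 1)
    if not faces:
        return None
    box = faces[0]
    shape = predictor(gray_image, box)
    coords = np.array([(shape.part(i).x, shape.part(i).y)
                       for i in UPPER_FACE_POINTS], dtype=np.float32)

    # Crop roughly the upper half of the face box (the part above the mask)
    # and compute HOG descriptors on a fixed-size version of it.
    top, left = max(box.top(), 0), max(box.left(), 0)
    upper = gray_image[top:top + box.height() // 2, left:box.right()]
    upper = resize(upper, (48, 48), anti_aliasing=True)
    hog_vec = hog(upper, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2))

    # Normalise landmark coordinates to the face box so the classifier
    # sees positions that are independent of image scale.
    coords[:, 0] = (coords[:, 0] - box.left()) / box.width()
    coords[:, 1] = (coords[:, 1] - box.top()) / box.height()
    return np.concatenate([coords.ravel(), hog_vec])


def build_classifier(feature_dim):
    """Small dense network standing in for the paper's CNN classifier."""
    model = keras.Sequential([
        keras.layers.Input(shape=(feature_dim,)),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dropout(0.3),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(len(EXPRESSIONS), activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

In use, the feature vectors extracted from the masked images would be stacked into a training matrix and passed to build_classifier(features.shape[1]).fit(...); the low-light enhancement step and the boundary/regional representation described in the abstract are not reproduced in this sketch.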
format Online
Article
Text
id pubmed-9921901
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9921901 2023-02-12 Sensors (Basel) Article MDPI 2023-01-17 /pmc/articles/PMC9921901/ /pubmed/36772117 http://dx.doi.org/10.3390/s23031080 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Masked Face Emotion Recognition Based on Facial Landmarks and Deep Learning Approaches for Visually Impaired People
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9921901/
https://www.ncbi.nlm.nih.gov/pubmed/36772117
http://dx.doi.org/10.3390/s23031080