
Occlusion facial expression recognition based on feature fusion residual attention network

Recognizing occluded facial expressions in the wild poses a significant challenge. However, most previous approaches rely solely on either global or local feature-based methods, leading to the loss of relevant expression features. To address these issues, a feature fusion residual attention network...

Full description

Bibliographic Details
Main Authors: Chen, Yuekun, Liu, Shuaishi, Zhao, Dongxu, Ji, Wenkai
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10472272/
https://www.ncbi.nlm.nih.gov/pubmed/37663762
http://dx.doi.org/10.3389/fnbot.2023.1250706
_version_ 1785100039306084352
author Chen, Yuekun
Liu, Shuaishi
Zhao, Dongxu
Ji, Wenkai
author_facet Chen, Yuekun
Liu, Shuaishi
Zhao, Dongxu
Ji, Wenkai
author_sort Chen, Yuekun
collection PubMed
description Recognizing occluded facial expressions in the wild poses a significant challenge. However, most previous approaches rely solely on either global or local feature-based methods, leading to the loss of relevant expression features. To address these issues, a feature fusion residual attention network (FFRA-Net) is proposed. FFRA-Net consists of a multi-scale module, a local attention module, and a feature fusion module. The multi-scale module divides the intermediate feature map into several sub-feature maps in an equal manner along the channel dimension. Then, a convolution operation is applied to each of these feature maps to obtain diverse global features. The local attention module divides the intermediate feature map into several sub-feature maps along the spatial dimension. Subsequently, a convolution operation is applied to each of these feature maps, resulting in the extraction of local key features through the attention mechanism. The feature fusion module plays a crucial role in integrating global and local expression features while also establishing residual links between inputs and outputs to compensate for the loss of fine-grained features. Last, two occlusion expression datasets (FM_RAF-DB and SG_RAF-DB) were constructed based on the RAF-DB dataset. Extensive experiments demonstrate that the proposed FFRA-Net achieves excellent results on four datasets: FM_RAF-DB, SG_RAF-DB, RAF-DB, and FERPLUS, with accuracies of 77.87%, 79.50%, 88.66%, and 88.97%, respectively. Thus, the approach presented in this paper demonstrates strong applicability in the context of occluded facial expression recognition (FER).
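The description above outlines the three FFRA-Net components: a multi-scale module that splits the feature map equally along the channel dimension, a local attention module that splits it along the spatial dimension and weights each patch, and a fusion module with a residual link. The following is a minimal NumPy sketch of that data flow only; the shapes, split counts, softmax-based attention, and the random per-group scaling used as a stand-in for convolution are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_scale(x, groups=4):
    """Split the feature map equally along the channel axis, process each
    sub-map independently (a stand-in for per-group convolution), and
    concatenate the results back together."""
    subs = np.split(x, groups, axis=0)                # each (C/groups, H, W)
    outs = [s * rng.standard_normal((s.shape[0], 1, 1)) for s in subs]
    return np.concatenate(outs, axis=0)

def local_attention(x, patches=4):
    """Split the feature map along the spatial (height) axis, weight each
    patch by a softmax score derived from its mean activation, and
    reassemble, so salient local regions are emphasized."""
    subs = np.split(x, patches, axis=1)               # each (C, H/patches, W)
    scores = np.array([s.mean() for s in subs])
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over patches
    return np.concatenate([w * s for w, s in zip(weights, subs)], axis=1)

def ffra_block(x):
    """Fuse global (multi-scale) and local (attention) features, then add a
    residual link from input to output to preserve fine-grained detail."""
    fused = multi_scale(x) + local_attention(x)
    return fused + x                                  # residual connection

x = rng.standard_normal((16, 8, 8))                   # (C, H, W) feature map
y = ffra_block(x)
print(y.shape)                                        # (16, 8, 8)
```

In a real network the per-group scaling and mean-based scoring would be learned convolutions and attention layers; the sketch only shows how the channel-wise and spatial splits keep the output shape equal to the input shape so the residual addition is valid.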
format Online
Article
Text
id pubmed-10472272
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10472272 2023-09-02 Occlusion facial expression recognition based on feature fusion residual attention network Chen, Yuekun Liu, Shuaishi Zhao, Dongxu Ji, Wenkai Front Neurorobot Neuroscience Recognizing occluded facial expressions in the wild poses a significant challenge. However, most previous approaches rely solely on either global or local feature-based methods, leading to the loss of relevant expression features. To address these issues, a feature fusion residual attention network (FFRA-Net) is proposed. FFRA-Net consists of a multi-scale module, a local attention module, and a feature fusion module. The multi-scale module divides the intermediate feature map into several sub-feature maps in an equal manner along the channel dimension. Then, a convolution operation is applied to each of these feature maps to obtain diverse global features. The local attention module divides the intermediate feature map into several sub-feature maps along the spatial dimension. Subsequently, a convolution operation is applied to each of these feature maps, resulting in the extraction of local key features through the attention mechanism. The feature fusion module plays a crucial role in integrating global and local expression features while also establishing residual links between inputs and outputs to compensate for the loss of fine-grained features. Last, two occlusion expression datasets (FM_RAF-DB and SG_RAF-DB) were constructed based on the RAF-DB dataset. Extensive experiments demonstrate that the proposed FFRA-Net achieves excellent results on four datasets: FM_RAF-DB, SG_RAF-DB, RAF-DB, and FERPLUS, with accuracies of 77.87%, 79.50%, 88.66%, and 88.97%, respectively. Thus, the approach presented in this paper demonstrates strong applicability in the context of occluded facial expression recognition (FER). Frontiers Media S.A.
2023-08-17 /pmc/articles/PMC10472272/ /pubmed/37663762 http://dx.doi.org/10.3389/fnbot.2023.1250706 Text en Copyright © 2023 Chen, Liu, Zhao and Ji. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Chen, Yuekun
Liu, Shuaishi
Zhao, Dongxu
Ji, Wenkai
Occlusion facial expression recognition based on feature fusion residual attention network
title Occlusion facial expression recognition based on feature fusion residual attention network
title_full Occlusion facial expression recognition based on feature fusion residual attention network
title_fullStr Occlusion facial expression recognition based on feature fusion residual attention network
title_full_unstemmed Occlusion facial expression recognition based on feature fusion residual attention network
title_short Occlusion facial expression recognition based on feature fusion residual attention network
title_sort occlusion facial expression recognition based on feature fusion residual attention network
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10472272/
https://www.ncbi.nlm.nih.gov/pubmed/37663762
http://dx.doi.org/10.3389/fnbot.2023.1250706
work_keys_str_mv AT chenyuekun occlusionfacialexpressionrecognitionbasedonfeaturefusionresidualattentionnetwork
AT liushuaishi occlusionfacialexpressionrecognitionbasedonfeaturefusionresidualattentionnetwork
AT zhaodongxu occlusionfacialexpressionrecognitionbasedonfeaturefusionresidualattentionnetwork
AT jiwenkai occlusionfacialexpressionrecognitionbasedonfeaturefusionresidualattentionnetwork