
Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM


Bibliographic Details
Main Authors: Haider, Irfan, Yang, Hyung-Jeong, Lee, Guee-Sang, Kim, Soo-Hyung
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10223619/
https://www.ncbi.nlm.nih.gov/pubmed/37430689
http://dx.doi.org/10.3390/s23104770
_version_ 1785049984999096320
author Haider, Irfan
Yang, Hyung-Jeong
Lee, Guee-Sang
Kim, Soo-Hyung
author_facet Haider, Irfan
Yang, Hyung-Jeong
Lee, Guee-Sang
Kim, Soo-Hyung
author_sort Haider, Irfan
collection PubMed
description Human facial emotion detection is one of the challenging tasks in computer vision. Owing to high inter-class variance, it is hard for machine learning models to predict facial emotions accurately. Moreover, the fact that a single person can display several facial emotions increases the diversity and complexity of the classification problem. In this paper, we propose a novel and intelligent approach for the classification of human facial emotions. The proposed approach comprises a customized ResNet18, adapted via transfer learning and trained with an integrated triplet loss function (TLF), followed by an SVM classification model. The pipeline consists of a face detector, which locates and refines the face bounding box, and a classifier, which identifies the facial expression class of the detected faces from deep features produced by the customized ResNet18 trained with triplet loss. RetinaFace extracts the detected face regions from the source image, and the ResNet18 model is trained on the cropped face images with triplet loss to produce those features. An SVM classifier then categorizes the facial expression based on the acquired deep features. The proposed method outperforms state-of-the-art (SoTA) methods on the JAFFE and MMI datasets, reaching accuracies of 98.44% and 99.02%, respectively, on seven emotions; meanwhile, its performance still needs to be fine-tuned for the FER2013 and AFFECTNET datasets.
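The description above outlines a three-stage pipeline: RetinaFace face detection and cropping, deep feature extraction with a triplet-loss-trained ResNet18, and SVM classification of the resulting features. Below is a minimal sketch of the feature-extraction and classification stages only, assuming PyTorch, torchvision, and scikit-learn; the names (EmbeddingNet, train_step, extract_features), the embedding size, margin, learning rate, and RBF kernel are illustrative assumptions, not details taken from the paper, and the faces are assumed to have already been detected and cropped.

# Sketch of the described pipeline (not the authors' code): a ResNet18
# backbone fine-tuned with triplet loss to produce embeddings, then an
# SVM trained on those embeddings. RetinaFace cropping is assumed done.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

class EmbeddingNet(nn.Module):
    # ResNet18 with its classifier head replaced by an embedding layer,
    # in the spirit of the abstract's "customized ResNet18".
    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(
            weights=models.ResNet18_Weights.IMAGENET1K_V1)  # transfer learning
        backbone.fc = nn.Linear(backbone.fc.in_features, embedding_dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so distances used by the triplet loss are well scaled.
        return nn.functional.normalize(self.backbone(x), p=2, dim=1)

model = EmbeddingNet()
triplet_loss = nn.TripletMarginLoss(margin=1.0)  # margin is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(anchor, positive, negative):
    # One update: anchor and positive share an emotion class, negative
    # differs. The triplet sampling strategy is left to the caller.
    optimizer.zero_grad()
    loss = triplet_loss(model(anchor), model(positive), model(negative))
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def extract_features(images: torch.Tensor):
    # Frozen embeddings serve as the "deep features" fed to the SVM.
    model.eval()
    return model(images).cpu().numpy()

svm = SVC(kernel="rbf")  # kernel choice is illustrative
# svm.fit(extract_features(train_faces), train_labels)
# predictions = svm.predict(extract_features(test_faces))

In a two-stage design like this, the SVM is fit on frozen embeddings only after the metric-learning stage converges, which is why feature extraction is wrapped in torch.no_grad() and the model is switched to eval mode.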
format Online
Article
Text
id pubmed-10223619
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10223619 2023-05-28 Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM Haider, Irfan Yang, Hyung-Jeong Lee, Guee-Sang Kim, Soo-Hyung Sensors (Basel) Article MDPI 2023-05-15 /pmc/articles/PMC10223619/ /pubmed/37430689 http://dx.doi.org/10.3390/s23104770 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Haider, Irfan
Yang, Hyung-Jeong
Lee, Guee-Sang
Kim, Soo-Hyung
Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM
title Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM
title_full Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM
title_fullStr Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM
title_full_unstemmed Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM
title_short Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM
title_sort robust human face emotion classification using triplet-loss-based deep cnn features and svm
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10223619/
https://www.ncbi.nlm.nih.gov/pubmed/37430689
http://dx.doi.org/10.3390/s23104770
work_keys_str_mv AT haiderirfan robusthumanfaceemotionclassificationusingtripletlossbaseddeepcnnfeaturesandsvm
AT yanghyungjeong robusthumanfaceemotionclassificationusingtripletlossbaseddeepcnnfeaturesandsvm
AT leegueesang robusthumanfaceemotionclassificationusingtripletlossbaseddeepcnnfeaturesandsvm
AT kimsoohyung robusthumanfaceemotionclassificationusingtripletlossbaseddeepcnnfeaturesandsvm