Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification

Bibliographic Details
Main Author: Yuan, Qinying
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9377850/
https://www.ncbi.nlm.nih.gov/pubmed/35978909
http://dx.doi.org/10.1155/2022/6453499
_version_ 1784768418977677312
author Yuan, Qinying
author_facet Yuan, Qinying
author_sort Yuan, Qinying
collection PubMed
description In this paper, we construct a classroom emotion recognition algorithm based on visual emotion classification to improve the quality of classroom teaching. We assign weights to the training images through an attention mechanism network and add a designed loss function so that the network focuses on the unoccluded parts of the face image that characterize the target emotion, thereby improving the accuracy of facial emotion recognition under occlusion. We analyze the salient expression features of classroom students and establish classification criteria and a criteria library. Videos of students' facial expressions are collected in the classroom, a multi-task convolutional neural network (MTCNN) is used for face detection and image segmentation, and the images with better feature morphology are selected to build a standard database. A visual emotion analysis method that fuses the global and local features of the image is proposed. To validate the effectiveness of the designed MTCNN model, two mainstream classification networks, VGG16 and ResNet18, were trained on RAF-DB, a masked dataset, and the classroom dataset constructed in this paper and compared with MTCNN; the final accuracies were 78.26% for ResNet18 and 75.03% for VGG16. The results show that the MTCNN proposed in this paper achieves better recognition performance. Tests of the loss function also show that it effectively improves recognition accuracy, and the MTCNN model reaches an accuracy of 93.53% in recognizing students' facial emotions. Finally, the dataset is extended with the expression-feature training method, and the experimental study shows that the method performs well and recognizes emotions effectively.
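The record does not include the author's implementation of the face-detection and cropping step, so the following is only a minimal sketch of that stage of the pipeline, assuming the facenet-pytorch MTCNN detector and OpenCV for video decoding; the paper trains its own multi-task network, and the paths, thresholds, and crop size here are hypothetical.

```python
# Hypothetical sketch of the face-detection/cropping step described above:
# sample frames from a classroom video, detect faces with MTCNN, and save
# confident crops that could later be screened for "better feature morphology".
import cv2
from PIL import Image
from facenet_pytorch import MTCNN

detector = MTCNN(keep_all=True, device="cpu")  # detect every face in a frame

def extract_face_crops(video_path, out_dir, min_prob=0.95, frame_step=30):
    """Sample every frame_step-th frame and save face crops above min_prob."""
    cap = cv2.VideoCapture(video_path)
    idx, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_step == 0:
            rgb = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            boxes, probs = detector.detect(rgb)
            if boxes is not None:
                for box, prob in zip(boxes, probs):
                    if prob >= min_prob:  # keep only clear detections
                        x1, y1, x2, y2 = [int(v) for v in box]
                        crop = rgb.crop((x1, y1, x2, y2)).resize((112, 112))
                        crop.save(f"{out_dir}/face_{idx}_{saved}.png")
                        saved += 1
        idx += 1
    cap.release()
    return saved
```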
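The attention-weighted fusion of global and local features is likewise not spelled out in this record. The toy PyTorch module below is one plausible reading under stated assumptions, not the paper's model: fixed facial sub-regions receive learned attention weights, so occluded regions can be down-weighted before being fused with a global feature for classification. The backbone, region count, input sizes, and class count are all assumptions.

```python
# Toy sketch (assumed architecture): a global branch encodes the whole face,
# a local branch encodes fixed sub-regions (e.g., eyes, mouth), and a small
# attention head weights the local features before fusion and classification.
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, feat_dim=128, num_classes=7):
        super().__init__()
        self.global_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, feat_dim), nn.ReLU())
        self.local_enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 28 * 28, feat_dim), nn.ReLU())
        self.attn = nn.Linear(feat_dim, 1)                      # one score per region
        self.classifier = nn.Linear(2 * feat_dim, num_classes)  # fused global + local

    def forward(self, face, regions):
        # face: (B, 3, 112, 112); regions: (B, R, 3, 28, 28) fixed facial patches
        b, r = regions.shape[:2]
        g = self.global_enc(face)                                             # (B, D)
        loc = self.local_enc(regions.view(b * r, *regions.shape[2:])).view(b, r, -1)
        w = torch.softmax(self.attn(loc).squeeze(-1), dim=1)                  # (B, R) weights
        l = (w.unsqueeze(-1) * loc).sum(dim=1)                                # attention-pooled local feature
        return self.classifier(torch.cat([g, l], dim=1))

# Usage with random tensors (batch of 2 faces, 4 regions each):
# logits = GlobalLocalFusion()(torch.randn(2, 3, 112, 112), torch.randn(2, 4, 3, 28, 28))
```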
format Online
Article
Text
id pubmed-9377850
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-9377850 2022-08-16 Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification Yuan, Qinying Comput Intell Neurosci Research Article In this paper, we construct a classroom emotion recognition algorithm based on visual emotion classification to improve the quality of classroom teaching. We assign weights to the training images through an attention mechanism network and add a designed loss function so that the network focuses on the unoccluded parts of the face image that characterize the target emotion, thereby improving the accuracy of facial emotion recognition under occlusion. We analyze the salient expression features of classroom students and establish classification criteria and a criteria library. Videos of students' facial expressions are collected in the classroom, a multi-task convolutional neural network (MTCNN) is used for face detection and image segmentation, and the images with better feature morphology are selected to build a standard database. A visual emotion analysis method that fuses the global and local features of the image is proposed. To validate the effectiveness of the designed MTCNN model, two mainstream classification networks, VGG16 and ResNet18, were trained on RAF-DB, a masked dataset, and the classroom dataset constructed in this paper and compared with MTCNN; the final accuracies were 78.26% for ResNet18 and 75.03% for VGG16. The results show that the MTCNN proposed in this paper achieves better recognition performance. Tests of the loss function also show that it effectively improves recognition accuracy, and the MTCNN model reaches an accuracy of 93.53% in recognizing students' facial emotions. Finally, the dataset is extended with the expression-feature training method, and the experimental study shows that the method performs well and recognizes emotions effectively. Hindawi 2022-08-08 /pmc/articles/PMC9377850/ /pubmed/35978909 http://dx.doi.org/10.1155/2022/6453499 Text en Copyright © 2022 Qinying Yuan. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Yuan, Qinying
Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification
title Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification
title_full Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification
title_fullStr Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification
title_full_unstemmed Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification
title_short Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification
title_sort research on classroom emotion recognition algorithm based on visual emotion classification
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9377850/
https://www.ncbi.nlm.nih.gov/pubmed/35978909
http://dx.doi.org/10.1155/2022/6453499
work_keys_str_mv AT yuanqinying researchonclassroomemotionrecognitionalgorithmbasedonvisualemotionclassification