Investigating the effect of multimodality and sentiments on speaking assessments: a facial emotional analysis

This quasi-experimental study aimed to determine the relationship between (i) oral language ability and emotions represented by facial emotions, and (ii) modality of assessment (audios versus videos) and sentiments embedded in each modality. Sixty university students watched and/or listened to four selected audio-visual stimuli and orally answered follow-up comprehension questions. One stimulus was designed to evoke happiness while the other, sadness. Participants’ facial emotions during the answering were measured using the FaceReader technology. In addition, four trained raters assessed the responses of the participants. An analysis of the FaceReader data showed that there were significant main and interaction effects of sentiment and modality on participants’ facial emotional expression. Notably, there was a significant difference in the amount of facial emotions evoked by (i) the happy vs. sad sentiment videos and (ii) video vs. audio modalities. In contrast, sentiments embedded in the stimuli and modalities had no significant effect on the measured speaking performance of the participants. Nevertheless, we found a number of significant correlations between the participants’ test scores and some of their facial emotions evoked by the stimuli. Implications of these findings for the assessment of oral communication are discussed.

Bibliographic Details

Main Authors: Chong, Joey Jia Qi; Aryadoust, Vahid
Format: Online Article (Text)
Language: English
Published: Springer US, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9713747/
https://www.ncbi.nlm.nih.gov/pubmed/36471775
http://dx.doi.org/10.1007/s10639-022-11478-7
Journal: Educ Inf Technol (Dordr)
Published online: 2022-12-01
Collection: PubMed (National Center for Biotechnology Information); record ID pubmed-9713747

License: © This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply (2022). This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.