
Real-time mental stress detection using multimodality expressions with a deep learning framework

Bibliographic Details
Main Authors: Zhang, Jing, Yin, Hang, Zhang, Jiayu, Yang, Gang, Qin, Jing, He, Ling
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9389269/
https://www.ncbi.nlm.nih.gov/pubmed/35992909
http://dx.doi.org/10.3389/fnins.2022.947168
_version_ 1784770406829260800
author Zhang, Jing
Yin, Hang
Zhang, Jiayu
Yang, Gang
Qin, Jing
He, Ling
author_facet Zhang, Jing
Yin, Hang
Zhang, Jiayu
Yang, Gang
Qin, Jing
He, Ling
author_sort Zhang, Jing
collection PubMed
description Mental stress is increasingly widespread and severe in modern society, threatening people's physical and mental health. To avoid its adverse effects, it is imperative to detect stress in time. Many studies have demonstrated the effectiveness of objective indicators for stress detection, and over the past few years a growing number of researchers have applied deep learning to this task. However, these works usually rely on a single modality and rarely combine stress-related information across modalities. In this paper, a real-time deep learning framework is proposed that fuses ECG, voice, and facial expressions for acute stress detection. The framework extracts stress-related information from each input through ResNet50 and I3D with a temporal attention module (TAM), where the TAM highlights the distinguishing temporal representations of stress in facial expressions. A matrix eigenvector-based approach is then used to fuse the multimodal stress information. To validate the framework, a well-established psychological experiment, the Montreal imaging stress task (MIST), was applied, and multimodal data were collected from 20 participants during the MIST. The results demonstrate that the framework combines stress-related information across modalities to achieve 85.1% accuracy in distinguishing acute stress, and it can serve as a tool for computer-aided stress detection.
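The abstract does not specify the matrix eigenvector-based fusion step in detail. A minimal sketch of one plausible reading — weighting each modality by the dominant eigenvector of the affinity (inner-product) matrix of their per-modality softmax outputs — is shown below; the function name, the affinity choice, and the example scores are all assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def eigenvector_fusion(scores):
    """Fuse per-modality stress scores via the dominant eigenvector
    of their affinity (inner-product) matrix.

    scores: (n_modalities, n_classes) array of softmax outputs.
    Returns a fused (n_classes,) probability vector.
    """
    S = np.asarray(scores, dtype=float)
    # Affinity between modalities: how strongly their predictions agree.
    A = S @ S.T
    # eigh returns eigenvalues in ascending order; the last column of
    # eigvecs is the eigenvector for the largest eigenvalue.
    eigvals, eigvecs = np.linalg.eigh(A)
    w = np.abs(eigvecs[:, -1])   # abs() fixes the arbitrary sign
    w = w / w.sum()              # normalize modality weights
    fused = w @ S                # weighted combination of modality outputs
    return fused / fused.sum()

# Hypothetical softmax outputs over [no-stress, stress] for ECG, voice, face:
ecg, voice, face = [0.3, 0.7], [0.4, 0.6], [0.2, 0.8]
fused = eigenvector_fusion([ecg, voice, face])
```

Under this reading, modalities whose predictions agree with the others receive larger weights, so a single noisy modality is down-weighted automatically.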
format Online
Article
Text
id pubmed-9389269
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-93892692022-08-20 Real-time mental stress detection using multimodality expressions with a deep learning framework Zhang, Jing Yin, Hang Zhang, Jiayu Yang, Gang Qin, Jing He, Ling Front Neurosci Neuroscience Frontiers Media S.A. 2022-08-05 /pmc/articles/PMC9389269/ /pubmed/35992909 http://dx.doi.org/10.3389/fnins.2022.947168 Text en Copyright © 2022 Zhang, Yin, Zhang, Yang, Qin and He.
https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Zhang, Jing
Yin, Hang
Zhang, Jiayu
Yang, Gang
Qin, Jing
He, Ling
Real-time mental stress detection using multimodality expressions with a deep learning framework
title Real-time mental stress detection using multimodality expressions with a deep learning framework
title_full Real-time mental stress detection using multimodality expressions with a deep learning framework
title_fullStr Real-time mental stress detection using multimodality expressions with a deep learning framework
title_full_unstemmed Real-time mental stress detection using multimodality expressions with a deep learning framework
title_short Real-time mental stress detection using multimodality expressions with a deep learning framework
title_sort real-time mental stress detection using multimodality expressions with a deep learning framework
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9389269/
https://www.ncbi.nlm.nih.gov/pubmed/35992909
http://dx.doi.org/10.3389/fnins.2022.947168
work_keys_str_mv AT zhangjing realtimementalstressdetectionusingmultimodalityexpressionswithadeeplearningframework
AT yinhang realtimementalstressdetectionusingmultimodalityexpressionswithadeeplearningframework
AT zhangjiayu realtimementalstressdetectionusingmultimodalityexpressionswithadeeplearningframework
AT yanggang realtimementalstressdetectionusingmultimodalityexpressionswithadeeplearningframework
AT qinjing realtimementalstressdetectionusingmultimodalityexpressionswithadeeplearningframework
AT heling realtimementalstressdetectionusingmultimodalityexpressionswithadeeplearningframework