
Classification of Depression and Its Severity Based on Multiple Audio Features Using a Graphical Convolutional Neural Network

Audio features are physical features that reflect single or complex coordinated movements in the vocal organs. Hence, in speech-based automatic depression classification, it is critical to consider the relationship among audio features. Here, we propose a deep learning-based classification model for discriminating depression and its severity using correlation among audio features. This model represents the correlation between audio features as graph structures and learns speech characteristics using a graph convolutional neural network. We conducted classification experiments in which the same subjects were allowed to be included in both the training and test data (Setting 1) and the subjects in the training and test data were completely separated (Setting 2). The results showed that the classification accuracy in Setting 1 significantly outperformed existing state-of-the-art methods, whereas that in Setting 2, which has not been presented in existing studies, was much lower than in Setting 1. We conclude that the proposed model is an effective tool for discriminating recurring patients and their severities, but it is difficult to detect new depressed patients. For practical application of the model, depression-specific speech regions appearing locally rather than the entire speech of depressed patients should be detected and assigned the appropriate class labels.
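The abstract describes representing correlations among audio features as a graph and learning from it with a graph convolutional neural network. The sketch below is an illustration only, not the authors' implementation: it builds an adjacency matrix by thresholding pairwise Pearson correlations between toy audio features, then applies a single graph-convolution layer with the common symmetric normalization. The 0.5 correlation threshold, the feature count, and all layer dimensions are assumptions chosen for the example.

```python
import numpy as np

def correlation_graph(features, threshold=0.5):
    """Build an adjacency matrix over audio features.

    features: (n_frames, n_features) array; each column is one audio
    feature (e.g. pitch, energy, an MFCC coefficient) over time.
    Two features are connected when the absolute Pearson correlation
    between their time series exceeds the threshold (an assumption).
    """
    corr = np.corrcoef(features, rowvar=False)     # (n_features, n_features)
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                     # self-loops are added later
    return adj

def gcn_layer(adj, node_feats, weight):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt         # symmetric normalization
    return np.maximum(norm @ node_feats @ weight, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 8))                      # 200 frames, 8 toy features
x[:, 1] = x[:, 0] + 0.1 * rng.normal(size=200)     # correlated pair -> an edge
adj = correlation_graph(x, threshold=0.5)
node_feats = rng.normal(size=(8, 4))               # per-feature node embeddings
w = rng.normal(size=(4, 3))
out = gcn_layer(adj, node_feats, w)
print(out.shape)                                   # (8, 3)
```

In this toy run, only the deliberately correlated pair of features exceeds the threshold, so the graph is nearly empty; real audio features (e.g. correlated MFCC bands) would yield a denser structure for the network to exploit.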


Bibliographic Details
Main Authors: Ishimaru, Momoko; Okada, Yoshifumi; Uchiyama, Ryunosuke; Horiguchi, Ryo; Toyoshima, Itsuki
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9864471/
https://www.ncbi.nlm.nih.gov/pubmed/36674342
http://dx.doi.org/10.3390/ijerph20021588
collection: PubMed
id: pubmed-9864471
institution: National Center for Biotechnology Information
record_format: MEDLINE/PubMed
journal: Int J Environ Res Public Health
published online: 2023-01-15
© 2023 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).