
Automatic depression severity assessment with deep learning using parameter-efficient tuning

Bibliographic Details
Main Authors: Lau, Clinton; Zhu, Xiaodan; Chan, Wai-Yip
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Psychiatry
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10308283/
https://www.ncbi.nlm.nih.gov/pubmed/37398577
http://dx.doi.org/10.3389/fpsyt.2023.1160291
author Lau, Clinton
Zhu, Xiaodan
Chan, Wai-Yip
collection PubMed
description INTRODUCTION: To assist mental health care providers with the assessment of depression, research to develop a standardized, accessible, and non-invasive technique has garnered considerable attention. Our study focuses on the application of deep learning models for the automatic assessment of depression severity based on clinical interview transcriptions. Despite the recent success of deep learning, the lack of large-scale, high-quality datasets is a major performance bottleneck for many mental health applications. METHODS: A novel approach is proposed to address the data scarcity problem for depression assessment. It leverages both pretrained large language models and parameter-efficient tuning techniques. The approach adapts a small set of tunable parameters, known as prefix vectors, to guide a pretrained model towards predicting the Patient Health Questionnaire (PHQ)-8 score of a person. Experiments were conducted on the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ) benchmark dataset of 189 subjects, partitioned into training, development, and test sets. Models were trained on the training set. The mean and standard deviation of each model's prediction performance, over five randomly initialized runs, were reported on the development set. Finally, optimized models were evaluated on the test set. RESULTS: The proposed model with prefix vectors outperformed all previously published methods, including models that utilized multiple data modalities, and achieved the best reported performance on the DAIC-WOZ test set, with a root mean square error of 4.67 and a mean absolute error of 3.80 on the PHQ-8 scale. Compared to conventionally fine-tuned baseline models, prefix-enhanced models were less prone to overfitting, using far fewer trainable parameters (relatively, less than 6%). DISCUSSION: While transfer learning through pretrained large language models can provide a good starting point for downstream learning, prefix vectors can further adapt the pretrained models effectively to the depression assessment task by adjusting only a small number of parameters. The improvement is due in part to the fine-grained flexibility of the prefix vector size in adjusting the model's learning capacity. Our results provide evidence that prefix-tuning can be a useful approach in developing tools for automatic depression assessment.
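
A minimal sketch may help make the prefix-vector idea in the abstract concrete. The following is a hypothetical illustration, not the authors' implementation: it prepends a small set of tunable prefix embeddings to the input of a frozen pretrained encoder (BERT is used purely as a stand-in) and regresses a PHQ-8 score from the output. The paper's prefix-tuning injects prefix vectors into the attention layers of the pretrained model; the backbone, prefix length, pooling, and training details below are all assumptions.

# Hypothetical sketch of prefix-style parameter-efficient tuning for PHQ-8 regression.
# The pretrained backbone is frozen; only the prefix vectors and a small regression
# head are trained. This is a simplified input-level variant, not the paper's exact
# per-layer prefix-tuning; model name, prefix length, and pooling are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PrefixPHQ8Regressor(nn.Module):
    def __init__(self, backbone="bert-base-uncased", prefix_len=16):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        for p in self.encoder.parameters():
            p.requires_grad_(False)                      # freeze the pretrained LM
        hidden = self.encoder.config.hidden_size
        # Tunable prefix vectors: the small set of task-specific parameters.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden) * 0.02)
        self.head = nn.Linear(hidden, 1)                 # maps pooled state to a PHQ-8 score

    def forward(self, input_ids, attention_mask):
        b = input_ids.size(0)
        tok_emb = self.encoder.get_input_embeddings()(input_ids)
        prefix = self.prefix.unsqueeze(0).expand(b, -1, -1)
        emb = torch.cat([prefix, tok_emb], dim=1)        # prepend prefix to token embeddings
        pad = torch.ones(b, self.prefix.size(0),
                         dtype=attention_mask.dtype, device=attention_mask.device)
        mask = torch.cat([pad, attention_mask], dim=1)
        out = self.encoder(inputs_embeds=emb, attention_mask=mask)
        pooled = out.last_hidden_state[:, 0]             # state at the first prefix position
        return self.head(pooled).squeeze(-1)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PrefixPHQ8Regressor()
batch = tokenizer(["I have had little interest in doing things and trouble sleeping."],
                  return_tensors="pt", truncation=True, max_length=512, padding=True)
pred = model(batch["input_ids"], batch["attention_mask"])
loss = nn.functional.mse_loss(pred, torch.tensor([14.0]))  # illustrative PHQ-8 target
loss.backward()                                            # gradients reach only prefix and head

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable / total:.2%} of the full model")

In this sketch the trainable fraction is tiny (the prefix plus one linear layer), which mirrors the abstract's point that adjusting relatively few parameters (under 6% of a fully fine-tuned baseline) can adapt the pretrained model, with the prefix length acting as a fine-grained knob on learning capacity.
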
format Online
Article
Text
id pubmed-10308283
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
journal Front Psychiatry
published 2023-06-15
rights Copyright © 2023 Lau, Zhu and Chan. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Automatic depression severity assessment with deep learning using parameter-efficient tuning
topic Psychiatry
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10308283/
https://www.ncbi.nlm.nih.gov/pubmed/37398577
http://dx.doi.org/10.3389/fpsyt.2023.1160291