
Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review


Bibliographic Details
Main Authors: Flanagan, Olivia; Chan, Amy; Roop, Partha; Sundram, Frederick
Format: Online Article Text
Language: English
Published: JMIR Publications 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8486998/
https://www.ncbi.nlm.nih.gov/pubmed/34533465
http://dx.doi.org/10.2196/24352
_version_ 1784577862805749760
author Flanagan, Olivia
Chan, Amy
Roop, Partha
Sundram, Frederick
author_facet Flanagan, Olivia
Chan, Amy
Roop, Partha
Sundram, Frederick
author_sort Flanagan, Olivia
collection PubMed
description BACKGROUND: Mood disorders are commonly underrecognized and undertreated, as diagnosis relies on self-reporting and clinical assessments that are often not timely. Speech characteristics of those with mood disorders differ from those of healthy individuals. With the wide use of smartphones and the emergence of machine learning approaches, smartphones can be used to monitor speech patterns to aid the diagnosis and monitoring of mood disorders. OBJECTIVE: The aim of this review is to synthesize research on using speech patterns from smartphones to diagnose and monitor mood disorders. METHODS: Literature searches of major databases (Medline, PsycInfo, EMBASE, and CINAHL) initially identified 832 relevant articles using the search terms “mood disorders”, “smartphone”, “voice analysis”, and their variants. Only 13 studies met the inclusion criteria: use of a smartphone for capturing voice data, a focus on diagnosing or monitoring mood disorder(s), prospective recruitment of clinical populations, and publication in English. Articles were assessed by 2 reviewers, and extracted data included data type, classifiers used, methods of capture, and study results. Studies were analyzed using a narrative synthesis approach. RESULTS: Studies showed that voice data alone had reasonable accuracy in predicting mood states and mood fluctuations based on objectively monitored speech patterns. While a fusion of different sensor modalities yielded the highest accuracy (97.4%), nearly 80% of the included studies were pilot trials or feasibility studies without control groups and had small sample sizes ranging from 1 to 73 participants. Studies were also carried out over short or varying timeframes and showed significant heterogeneity of methods in terms of the types of audio data captured, environmental contexts, classifiers, and measures to control for privacy and ambient noise. CONCLUSIONS: Approaches that allow smartphone-based monitoring of speech patterns in mood disorders are growing rapidly. The current body of evidence supports the value of speech patterns for monitoring, classifying, and predicting mood states in real time. However, many challenges remain around the robustness, cost-effectiveness, and acceptability of such an approach, and further work is required to build on current research, reduce the heterogeneity of methodologies, and clinically evaluate the benefits and risks of such approaches.
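The description above summarizes a common pipeline: record short speech samples on a phone, extract acoustic features, and feed them to a classifier that predicts mood state. The sketch below illustrates that general idea only; the libraries (librosa, scikit-learn), the MFCC/energy/pitch feature set, the random forest classifier, and the file names and labels are illustrative assumptions, not the method of any study included in the review.

```python
# Illustrative sketch only: a generic acoustic-feature + classifier pipeline of the kind
# the reviewed studies describe at a high level. Feature set, classifier, and data are
# assumptions chosen for demonstration, not drawn from any specific included study.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def acoustic_features(wav_path: str) -> np.ndarray:
    """Summarize a voice recording as MFCC statistics plus basic energy and pitch statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral envelope features
    rms = librosa.feature.rms(y=y)                        # frame-level energy
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)         # rough fundamental-frequency track
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [rms.mean(), rms.std(), np.nanmean(f0), np.nanstd(f0)],
    ])

# Hypothetical inputs: paths to short smartphone recordings and self-reported mood labels.
recordings = ["sample_001.wav", "sample_002.wav", "sample_003.wav", "sample_004.wav"]
mood_labels = [0, 1, 0, 1]  # assumed encoding, e.g., 0 = euthymic, 1 = depressive episode

X = np.vstack([acoustic_features(p) for p in recordings])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, mood_labels, cv=2).mean())  # toy accuracy estimate
```

In practice, the reviewed studies varied widely in what was captured (scripted readings, free speech, ambient call audio), in the classifiers used, and in how privacy and ambient noise were handled, which is part of the methodological heterogeneity the review highlights.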
format Online
Article
Text
id pubmed-8486998
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher JMIR Publications
record_format MEDLINE/PubMed
spelling pubmed-8486998 2021-10-18 Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review Flanagan, Olivia Chan, Amy Roop, Partha Sundram, Frederick JMIR Mhealth Uhealth Review JMIR Publications 2021-09-17 /pmc/articles/PMC8486998/ /pubmed/34533465 http://dx.doi.org/10.2196/24352 Text en ©Olivia Flanagan, Amy Chan, Partha Roop, Frederick Sundram. Originally published in JMIR mHealth and uHealth (https://mhealth.jmir.org), 17.09.2021. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR mHealth and uHealth, is properly cited. The complete bibliographic information, a link to the original publication on https://mhealth.jmir.org/, as well as this copyright and license information must be included.
spellingShingle Review
Flanagan, Olivia
Chan, Amy
Roop, Partha
Sundram, Frederick
Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review
title Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review
title_full Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review
title_fullStr Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review
title_full_unstemmed Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review
title_short Using Acoustic Speech Patterns From Smartphones to Investigate Mood Disorders: Scoping Review
title_sort using acoustic speech patterns from smartphones to investigate mood disorders: scoping review
topic Review
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8486998/
https://www.ncbi.nlm.nih.gov/pubmed/34533465
http://dx.doi.org/10.2196/24352
work_keys_str_mv AT flanaganolivia usingacousticspeechpatternsfromsmartphonestoinvestigatemooddisordersscopingreview
AT chanamy usingacousticspeechpatternsfromsmartphonestoinvestigatemooddisordersscopingreview
AT rooppartha usingacousticspeechpatternsfromsmartphonestoinvestigatemooddisordersscopingreview
AT sundramfrederick usingacousticspeechpatternsfromsmartphonestoinvestigatemooddisordersscopingreview