
Digital Omicron detection using unscripted voice samples from social media


Bibliographic Details
Main Authors: Anibal, James T., Landa, Adam J., Nguyen, Hang T., Peltekian, Alec K., Shin, Andrew D., Song, Miranda J., Christou, Anna S., Hazen, Lindsey A., Rivera, Jocelyne, Morhard, Robert A., Bagci, Ulas, Li, Ming, Clifton, David A., Wood, Bradford J.
Format: Online Article Text
Language: English
Published: Cold Spring Harbor Laboratory 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9516853/
https://www.ncbi.nlm.nih.gov/pubmed/36172131
http://dx.doi.org/10.1101/2022.09.13.22279673
author Anibal, James T.
Landa, Adam J.
Nguyen, Hang T.
Peltekian, Alec K.
Shin, Andrew D.
Song, Miranda J.
Christou, Anna S.
Hazen, Lindsey A.
Rivera, Jocelyne
Morhard, Robert A.
Bagci, Ulas
Li, Ming
Clifton, David A.
Wood, Bradford J.
collection PubMed
description The success of artificial intelligence in clinical environments relies upon the diversity and availability of training data. In some cases, social media data may be used to counterbalance the limited amount of accessible, well-curated clinical data, but this possibility remains largely unexplored. In this study, we mined YouTube to collect voice data from individuals with self-declared positive COVID-19 tests during time periods in which Omicron was the predominant variant(1,2,3), while also sampling non-Omicron COVID-19 variants, other upper respiratory infections (URI), and healthy subjects. The resulting dataset was used to train a DenseNet model to detect the Omicron variant from voice changes. Our model achieved 0.85/0.80 specificity/sensitivity in separating Omicron samples from healthy samples and 0.76/0.70 specificity/sensitivity in separating Omicron samples from symptomatic non-COVID samples. In comparison with past studies, which used scripted voice samples, we showed that leveraging the intra-sample variance inherent to unscripted speech enhanced generalization. Our work introduced novel design paradigms for audio-based diagnostic tools and established the potential of social media data to train digital diagnostic models suitable for real-world deployment.
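The description above names the modelling approach (a DenseNet trained on unscripted voice recordings, evaluated by specificity and sensitivity), but this record contains no code. As a rough, hypothetical sketch only, since the authors' actual preprocessing, architecture settings, and decision thresholds are not part of this record, a PyTorch/torchaudio pipeline of that general shape might look like the following; the mel-spectrogram front end, class names, sample rate, and threshold are all assumptions.

# Hypothetical sketch, not the authors' released code: a DenseNet-121 over
# mel-spectrograms of short voice clips, with the specificity/sensitivity
# metrics quoted in the description. Assumes PyTorch, torchaudio, torchvision.
import torch
import torch.nn as nn
import torchaudio
from torchvision.models import densenet121

class VoiceOmicronClassifier(nn.Module):
    """Mel-spectrogram front end feeding a DenseNet-121 backbone (binary logit)."""
    def __init__(self, sample_rate=16000, n_mels=128):
        super().__init__()
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_mels=n_mels)
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        backbone = densenet121(weights=None)
        # DenseNet expects 3-channel images; accept a single spectrogram channel.
        backbone.features.conv0 = nn.Conv2d(
            1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.classifier = nn.Linear(backbone.classifier.in_features, 1)
        self.backbone = backbone

    def forward(self, waveform):                 # waveform: (batch, samples)
        x = self.to_db(self.melspec(waveform))   # (batch, n_mels, frames)
        return self.backbone(x.unsqueeze(1)).squeeze(1)  # one logit per clip

def specificity_sensitivity(logits, labels, threshold=0.0):
    """Specificity = TN / (TN + FP); sensitivity = TP / (TP + FN)."""
    preds = (logits > threshold).long()
    tp = ((preds == 1) & (labels == 1)).sum().item()
    tn = ((preds == 0) & (labels == 0)).sum().item()
    fp = ((preds == 1) & (labels == 0)).sum().item()
    fn = ((preds == 0) & (labels == 1)).sum().item()
    return tn / max(tn + fp, 1), tp / max(tp + fn, 1)

# Smoke test on random audio, only to show tensor shapes and the metric math.
model = VoiceOmicronClassifier()
wave = torch.randn(4, 16000 * 5)                 # four 5-second clips at 16 kHz
labels = torch.tensor([1, 0, 1, 0])              # 1 = Omicron-positive, 0 = control
with torch.no_grad():
    spec, sens = specificity_sensitivity(model(wave), labels)
print(f"specificity={spec:.2f}  sensitivity={sens:.2f}")

In practice each unscripted clip would be split into several such windows, so one speaker contributes multiple spectrograms; that intra-sample variance is what the description credits for the improved generalization over scripted-speech datasets.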
format Online
Article
Text
id pubmed-9516853
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Cold Spring Harbor Laboratory
record_format MEDLINE/PubMed
spelling pubmed-9516853 2022-12-15
Digital Omicron detection using unscripted voice samples from social media. medRxiv Article.
Cold Spring Harbor Laboratory 2022-12-22 /pmc/articles/PMC9516853/ /pubmed/36172131 http://dx.doi.org/10.1101/2022.09.13.22279673 Text en
License: This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.
title Digital Omicron detection using unscripted voice samples from social media
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9516853/
https://www.ncbi.nlm.nih.gov/pubmed/36172131
http://dx.doi.org/10.1101/2022.09.13.22279673