Racial disparities in automated speech recognition
Automated speech recognition (ASR) systems, which use sophisticated machine-learning algorithms to convert spoken language to text, have become increasingly widespread, powering popular virtual assistants, facilitating automated closed captioning, and enabling digital dictation platforms for health care.
Main Authors: Koenecke, Allison; Nam, Andrew; Lake, Emily; Nudell, Joe; Quartey, Minnie; Mengesha, Zion; Toups, Connor; Rickford, John R.; Jurafsky, Dan; Goel, Sharad
Format: Online Article Text
Language: English
Published: National Academy of Sciences, 2020
Subjects: Social Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7149386/ ; https://www.ncbi.nlm.nih.gov/pubmed/32205437 ; http://dx.doi.org/10.1073/pnas.1915768117
Field | Value
---|---
_version_ | 1783520800677036032 |
author | Koenecke, Allison; Nam, Andrew; Lake, Emily; Nudell, Joe; Quartey, Minnie; Mengesha, Zion; Toups, Connor; Rickford, John R.; Jurafsky, Dan; Goel, Sharad |
author_facet | Koenecke, Allison; Nam, Andrew; Lake, Emily; Nudell, Joe; Quartey, Minnie; Mengesha, Zion; Toups, Connor; Rickford, John R.; Jurafsky, Dan; Goel, Sharad |
author_sort | Koenecke, Allison |
collection | PubMed |
description | Automated speech recognition (ASR) systems, which use sophisticated machine-learning algorithms to convert spoken language to text, have become increasingly widespread, powering popular virtual assistants, facilitating automated closed captioning, and enabling digital dictation platforms for health care. Over the last several years, the quality of these systems has dramatically improved, due both to advances in deep learning and to the collection of large-scale datasets used to train the systems. There is concern, however, that these tools do not work equally well for all subgroups of the population. Here, we examine the ability of five state-of-the-art ASR systems—developed by Amazon, Apple, Google, IBM, and Microsoft—to transcribe structured interviews conducted with 42 white speakers and 73 black speakers. In total, this corpus spans five US cities and consists of 19.8 h of audio matched on the age and gender of the speaker. We found that all five ASR systems exhibited substantial racial disparities, with an average word error rate (WER) of 0.35 for black speakers compared with 0.19 for white speakers. We trace these disparities to the underlying acoustic models used by the ASR systems as the race gap was equally large on a subset of identical phrases spoken by black and white individuals in our corpus. We conclude by proposing strategies—such as using more diverse training datasets that include African American Vernacular English—to reduce these performance differences and ensure speech recognition technology is inclusive. |
format | Online Article Text |
id | pubmed-7149386 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | National Academy of Sciences |
record_format | MEDLINE/PubMed |
spelling | pubmed-71493862020-04-15 Racial disparities in automated speech recognition Koenecke, Allison Nam, Andrew Lake, Emily Nudell, Joe Quartey, Minnie Mengesha, Zion Toups, Connor Rickford, John R. Jurafsky, Dan Goel, Sharad Proc Natl Acad Sci U S A Social Sciences Automated speech recognition (ASR) systems, which use sophisticated machine-learning algorithms to convert spoken language to text, have become increasingly widespread, powering popular virtual assistants, facilitating automated closed captioning, and enabling digital dictation platforms for health care. Over the last several years, the quality of these systems has dramatically improved, due both to advances in deep learning and to the collection of large-scale datasets used to train the systems. There is concern, however, that these tools do not work equally well for all subgroups of the population. Here, we examine the ability of five state-of-the-art ASR systems—developed by Amazon, Apple, Google, IBM, and Microsoft—to transcribe structured interviews conducted with 42 white speakers and 73 black speakers. In total, this corpus spans five US cities and consists of 19.8 h of audio matched on the age and gender of the speaker. We found that all five ASR systems exhibited substantial racial disparities, with an average word error rate (WER) of 0.35 for black speakers compared with 0.19 for white speakers. We trace these disparities to the underlying acoustic models used by the ASR systems as the race gap was equally large on a subset of identical phrases spoken by black and white individuals in our corpus. We conclude by proposing strategies—such as using more diverse training datasets that include African American Vernacular English—to reduce these performance differences and ensure speech recognition technology is inclusive. National Academy of Sciences 2020-04-07 2020-03-23 /pmc/articles/PMC7149386/ /pubmed/32205437 http://dx.doi.org/10.1073/pnas.1915768117 Text en Copyright © 2020 the Author(s). Published by PNAS. https://creativecommons.org/licenses/by-nc-nd/4.0/ https://creativecommons.org/licenses/by-nc-nd/4.0/This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND) (https://creativecommons.org/licenses/by-nc-nd/4.0/) . |
spellingShingle | Social Sciences Koenecke, Allison Nam, Andrew Lake, Emily Nudell, Joe Quartey, Minnie Mengesha, Zion Toups, Connor Rickford, John R. Jurafsky, Dan Goel, Sharad Racial disparities in automated speech recognition |
title | Racial disparities in automated speech recognition |
title_full | Racial disparities in automated speech recognition |
title_fullStr | Racial disparities in automated speech recognition |
title_full_unstemmed | Racial disparities in automated speech recognition |
title_short | Racial disparities in automated speech recognition |
title_sort | racial disparities in automated speech recognition |
topic | Social Sciences |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7149386/ https://www.ncbi.nlm.nih.gov/pubmed/32205437 http://dx.doi.org/10.1073/pnas.1915768117 |
work_keys_str_mv | AT koeneckeallison racialdisparitiesinautomatedspeechrecognition AT namandrew racialdisparitiesinautomatedspeechrecognition AT lakeemily racialdisparitiesinautomatedspeechrecognition AT nudelljoe racialdisparitiesinautomatedspeechrecognition AT quarteyminnie racialdisparitiesinautomatedspeechrecognition AT mengeshazion racialdisparitiesinautomatedspeechrecognition AT toupsconnor racialdisparitiesinautomatedspeechrecognition AT rickfordjohnr racialdisparitiesinautomatedspeechrecognition AT jurafskydan racialdisparitiesinautomatedspeechrecognition AT goelsharad racialdisparitiesinautomatedspeechrecognition |
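The abstract above reports results in terms of word error rate (WER): an average of 0.35 for black speakers versus 0.19 for white speakers. For readers unfamiliar with the metric, the sketch below shows how WER is commonly computed as a word-level edit distance divided by the length of the reference transcript. This is a generic illustration, not the evaluation pipeline used in the paper; the function name `wer`, the example sentences, and the simple lowercase/whitespace normalization are assumptions, and real evaluations (including this study's) apply more careful text normalization before scoring.

```python
# Minimal sketch: word error rate (WER) as word-level Levenshtein distance
# divided by the number of words in the reference transcript.
# WER = (substitutions + deletions + insertions) / reference length.
def wer(reference: str, hypothesis: str) -> float:
    # Assumed, simplistic normalization: lowercase and split on whitespace.
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: one deletion ("is") and one substitution ("the" -> "that")
# against a six-word reference give a WER of 2/6, roughly 0.33.
print(wer("he is going to the store", "he going to that store"))
```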