
Two-stage visual speech recognition for intensive care patients

In this work, we propose a framework to enhance the communication abilities of speech-impaired patients in an intensive care setting via lip-reading. Medical procedures such as a tracheotomy cause the patient to lose the ability to utter speech while having little to no impact on habitual lip movement. Consequently, we developed a framework to predict the silently spoken text by performing visual speech recognition, i.e., lip-reading. In a two-stage architecture, frames of the patient’s face are used to infer audio features as an intermediate prediction target, which are then used to predict the uttered text. To the best of our knowledge, this is the first approach to bring visual speech recognition into an intensive care setting. For this purpose, we recorded an audio-visual dataset in the University Hospital of Aachen’s intensive care unit (ICU), with a language corpus hand-picked by experienced clinicians to be representative of their day-to-day routine. With a word error rate of 6.3%, the trained system achieves sufficient overall performance to substantially improve the quality of communication between patients and clinicians or relatives.
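
The abstract describes a two-stage architecture: video frames of the patient's face are first mapped to audio features as an intermediate prediction target, and those features are then decoded into text. Below is a minimal sketch of that idea in PyTorch; the module names, mel-spectrogram targets, layer sizes, and per-frame character outputs are assumptions made for illustration, not the authors' published implementation.

# Minimal two-stage sketch (assumed architecture, for illustration only):
# stage 1 maps mouth-region video frames to audio features (here: mel bins),
# stage 2 decodes those features into per-frame character logits.
import torch
import torch.nn as nn


class LipToAudioFeatures(nn.Module):
    """Stage 1: sequence of mouth crops -> sequence of audio feature vectors."""

    def __init__(self, n_mels: int = 80, hidden: int = 256):
        super().__init__()
        # Small per-frame CNN over 64x64 grayscale mouth crops (sizes assumed).
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Bidirectional GRU models the temporal context across frames.
        self.rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_mels)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, 1, 64, 64)
        b, t = frames.shape[:2]
        per_frame = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(per_frame)
        return self.head(out)          # (batch, time, n_mels)


class AudioFeaturesToText(nn.Module):
    """Stage 2: predicted audio features -> character logits (e.g., for CTC decoding)."""

    def __init__(self, n_mels: int = 80, vocab_size: int = 40, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, vocab_size)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(mel)
        return self.classifier(out)    # (batch, time, vocab_size)


if __name__ == "__main__":
    stage1, stage2 = LipToAudioFeatures(), AudioFeaturesToText()
    video = torch.randn(2, 75, 1, 64, 64)   # two 3-second clips at 25 fps
    mel_pred = stage1(video)                # intermediate audio-feature prediction
    char_logits = stage2(mel_pred)          # per-frame character scores
    print(mel_pred.shape, char_logits.shape)

In such a setup, stage 1 would typically be trained against audio features extracted from the recorded audio-visual ICU data and stage 2 against the transcribed corpus; the 6.3% word error rate reported above refers to the authors' system, not this sketch.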


Bibliographic Details

Main Authors: Laux, Hendrik; Hallawa, Ahmed; Assis, Julio Cesar Sevarolli; Schmeink, Anke; Martin, Lukas; Peine, Arne
Format: Online Article Text
Language: English
Journal: Sci Rep
Published: Nature Publishing Group UK, 2023-01-17
Subjects: Article
Rights: © The Author(s) 2023. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9844948/
https://www.ncbi.nlm.nih.gov/pubmed/36650188
http://dx.doi.org/10.1038/s41598-022-26155-5