
Playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation


Bibliographic Details
Main Authors: Steffensen, Torjus L., Bartnes, Barge, Fuglstad, Maja L., Auflem, Marius, Steinert, Martin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10642916/
https://www.ncbi.nlm.nih.gov/pubmed/37965634
http://dx.doi.org/10.3389/frobt.2023.1218174
_version_ 1785147044700094464
author Steffensen, Torjus L.
Bartnes, Barge
Fuglstad, Maja L.
Auflem, Marius
Steinert, Martin
author_facet Steffensen, Torjus L.
Bartnes, Barge
Fuglstad, Maja L.
Auflem, Marius
Steinert, Martin
author_sort Steffensen, Torjus L.
collection PubMed
description Objective: In emergency medicine, airway management is a core skill that includes endotracheal intubation (ETI), a common technique that can result in ineffective ventilation and laryngotracheal injury if executed incorrectly. We present a method for automatically generating performance feedback during ETI simulator training, potentially augmenting training outcomes on robotic simulators. Method: Electret microphones recorded ultrasonic echoes pulsed through the complex geometry of a simulated airway during ETI performed on a full-size patient simulator. As the endotracheal tube is inserted deeper and the cuff is inflated, the resulting changes in geometry are reflected in the recorded signal. We trained machine learning models to classify 240 intubations distributed equally among six conditions: three insertion depths and two cuff inflation states. The best-performing models were cross-validated in a leave-one-subject-out scheme. Results: The best performance was achieved by transfer learning with a convolutional neural network pre-trained for sound classification, reaching global accuracy above 98% on 1-second-long audio test samples. A support vector machine trained on different features achieved a median accuracy of 85% on the full label set and 97% on a reduced label set of tube depth only. Significance: This proof-of-concept study demonstrates a relatively simple method of measuring qualitative performance criteria during simulated ETI that does not compromise the ecological validity of the simulated anatomy. Because traditional sonar is hampered by geometrical complexity, compounded by the equipment introduced during ETI, the accuracy of machine learning methods in this confined design space suggests applicability to other invasive procedures.
By enabling better interaction between the human user and the robotic simulator, this approach could improve training experiences and outcomes in medical simulation for ETI as well as many other invasive clinical procedures.
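The evaluation scheme named in the abstract (a support vector machine cross-validated leave-one-subject-out over the six depth/cuff conditions) can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic placeholder data: the subject count, feature vector length, and feature values are assumptions, as the study's acoustic recordings and features are not part of this record.

```python
# Leave-one-subject-out (LOSO) cross-validation sketch with an SVM,
# as described in the abstract. All data below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_subjects = 8        # assumed participant count (not stated in this record)
n_per_subject = 30    # 240 intubations split evenly across 8 subjects
n_features = 16       # placeholder length of an acoustic feature vector

X = rng.normal(size=(n_subjects * n_per_subject, n_features))
# Six conditions: 3 insertion depths x 2 cuff inflation states, encoded 0..5
y = rng.integers(0, 6, size=n_subjects * n_per_subject)
# Group label = which subject produced each recording
groups = np.repeat(np.arange(n_subjects), n_per_subject)

# Each fold holds out all recordings from one subject, so the model is
# always tested on a person it never saw during training.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(model, X, y, groups=groups, cv=LeaveOneGroupOut())

print("per-subject accuracies:", np.round(scores, 2))
print("median accuracy:", np.median(scores))
```

On random labels like these, per-fold accuracy hovers around chance (1/6); the point of the sketch is the grouping structure, which prevents recordings from one subject leaking between the training and test folds.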
format Online
Article
Text
id pubmed-10642916
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-106429162023-11-14 Playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation Steffensen, Torjus L. Bartnes, Barge Fuglstad, Maja L. Auflem, Marius Steinert, Martin Front Robot AI Robotics and AI Objective: In emergency medicine, airway management is a core skill that includes endotracheal intubation (ETI), a common technique that can result in ineffective ventilation and laryngotracheal injury if executed incorrectly. We present a method for automatically generating performance feedback during ETI simulator training, potentially augmenting training outcomes on robotic simulators. Method: Electret microphones recorded ultrasonic echoes pulsed through the complex geometry of a simulated airway during ETI performed on a full-size patient simulator. As the endotracheal tube is inserted deeper and the cuff is inflated, the resulting changes in geometry are reflected in the recorded signal. We trained machine learning models to classify 240 intubations distributed equally between six conditions: three insertion depths and two cuff inflation states. The best performing models were cross validated in a leave-one-subject-out scheme. Results: Best performance was achieved by transfer learning with a convolutional neural network pre-trained for sound classification, reaching global accuracy above 98% on 1-second-long audio test samples. A support vector machine trained on different features achieved a median accuracy of 85% on the full label set and 97% on a reduced label set of tube depth only. Significance: This proof-of-concept study demonstrates a method of measuring qualitative performance criteria during simulated ETI in a relatively simple way that does not damage ecological validity of the simulated anatomy. 
As traditional sonar is hampered by geometrical complexity compounded by the introduced equipment in ETI, the accuracy of machine learning methods in this confined design space enables application in other invasive procedures. By enabling better interaction between the human user and the robotic simulator, this approach could improve training experiences and outcomes in medical simulation for ETI as well as many other invasive clinical procedures. Frontiers Media S.A. 2023-10-30 /pmc/articles/PMC10642916/ /pubmed/37965634 http://dx.doi.org/10.3389/frobt.2023.1218174 Text en Copyright © 2023 Steffensen, Bartnes, Fuglstad, Auflem and Steinert. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Steffensen, Torjus L.
Bartnes, Barge
Fuglstad, Maja L.
Auflem, Marius
Steinert, Martin
Playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation
title Playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation
title_full Playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation
title_fullStr Playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation
title_full_unstemmed Playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation
title_short Playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation
title_sort playing the pipes: acoustic sensing and machine learning for performance feedback during endotracheal intubation simulation
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10642916/
https://www.ncbi.nlm.nih.gov/pubmed/37965634
http://dx.doi.org/10.3389/frobt.2023.1218174
work_keys_str_mv AT steffensentorjusl playingthepipesacousticsensingandmachinelearningforperformancefeedbackduringendotrachealintubationsimulation
AT bartnesbarge playingthepipesacousticsensingandmachinelearningforperformancefeedbackduringendotrachealintubationsimulation
AT fuglstadmajal playingthepipesacousticsensingandmachinelearningforperformancefeedbackduringendotrachealintubationsimulation
AT auflemmarius playingthepipesacousticsensingandmachinelearningforperformancefeedbackduringendotrachealintubationsimulation
AT steinertmartin playingthepipesacousticsensingandmachinelearningforperformancefeedbackduringendotrachealintubationsimulation