Initial Development of an Automated Platform for Assessing Trainee Performance on Case Presentations
BACKGROUND: Oral case presentation is a crucial skill of physicians and a key component of team-based care. However, consistent and objective assessment and feedback on presentations during training are infrequent. OBJECTIVE: To determine the potential value of applying natural language processing,...
| Main authors: | King, Andrew J.; Kahn, Jeremy M.; Brant, Emily B.; Cooper, Gregory F.; Mowery, Danielle L. |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | American Thoracic Society, 2022 |
| Subjects: | Original Research |
| Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9886197/ https://www.ncbi.nlm.nih.gov/pubmed/36726701 http://dx.doi.org/10.34197/ats-scholar.2022-0010OC |
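The abstract describes converting presentation transcripts into numeric representations and ranking fellows by how similar their presentations are to an attending's reference presentations. The record does not specify the exact pipeline, so the following is only a minimal sketch, assuming TF-IDF vectors and cosine similarity as stand-ins for the "machine-readable numbers" and similarity measure; the transcripts and fellow names are hypothetical.

```python
# Minimal sketch only: TF-IDF vectors and cosine similarity stand in for the
# unspecified numeric representation and similarity measure in the abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical transcripts: one attending (the reference) and two fellows.
attending = "Patient is a 64 year old admitted with septic shock, now on norepinephrine."
fellows = {
    "fellow_A": "64 year old with septic shock, remains on norepinephrine overnight.",
    "fellow_B": "This patient came in a couple of days ago and seems to be doing okay.",
}

# Convert human-readable text into numeric vectors (one row per transcript).
corpus = [attending] + list(fellows.values())
vectors = TfidfVectorizer().fit_transform(corpus)

# Cosine similarity of each fellow's transcript to the attending's (row 0);
# higher similarity corresponds to smaller distance between representations.
scores = cosine_similarity(vectors[0], vectors[1:]).ravel()

# Rank fellows by how similar their presentation is to the reference.
for name, score in sorted(zip(fellows, scores), key=lambda x: x[1], reverse=True):
    print(f"{name}: similarity {score:.2f}")
```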
_version_ | 1784880083937263616 |
author | King, Andrew J.; Kahn, Jeremy M.; Brant, Emily B.; Cooper, Gregory F.; Mowery, Danielle L. |
author_facet | King, Andrew J.; Kahn, Jeremy M.; Brant, Emily B.; Cooper, Gregory F.; Mowery, Danielle L. |
author_sort | King, Andrew J. |
collection | PubMed |
description | BACKGROUND: Oral case presentation is a crucial skill of physicians and a key component of team-based care. However, consistent and objective assessment and feedback on presentations during training are infrequent. OBJECTIVE: To determine the potential value of applying natural language processing, computer software that extracts meaning from text, to transcripts of oral case presentations as a strategy to assess their quality automatically and objectively. METHODS: We transcribed a collection of simulated oral case presentations. The presentations were from eight critical care fellows and one critical care attending. They were instructed to review the medical charts of 11 real intensive care unit patient cases and to audio record themselves, presenting each case as if they were doing so on morning rounds. We then used natural language processing to convert the transcripts from human-readable text into machine-readable numbers. These numbers represent details of the presentation style and content. The distance between the numeric representation of two different transcripts negatively correlates with the similarity of those two transcripts. We ranked fellows on the basis of how similar their presentations were to the attending’s presentations. RESULTS: The 99 presentations included 260 minutes of audio (mean length: 2.6 ± 1.24 min per case). On average, 23.88 ± 2.65 sentences were spoken, and each sentence had 14.10 ± 0.67 words, 3.62 ± 0.15 medical concepts, and 0.75 ± 0.09 medical adjectives. When ranking fellows on the basis of how similar their presentations were to the attending’s presentation, we found a gap between the five fellows with the most similar presentations and the three fellows with the least similar presentations (average group similarity scores of 0.62 ± 0.01 and 0.53 ± 0.01, respectively). Rankings were sensitive to whether presentation style or content information were weighted more heavily when calculating transcript similarity. CONCLUSION: Natural language processing enabled the ranking of case presentations on the basis of how similar they were to a reference presentation. Although additional work is needed to convert these rankings, and underlying similarity scores, into actionable feedback for trainees, these methods may support new tools for improving medical education. |
format | Online Article Text |
id | pubmed-9886197 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | American Thoracic Society |
record_format | MEDLINE/PubMed |
spelling | pubmed-98861972023-01-31 Initial Development of an Automated Platform for Assessing Trainee Performance on Case Presentations King, Andrew J. Kahn, Jeremy M. Brant, Emily B. Cooper, Gregory F. Mowery, Danielle L. ATS Sch Original Research BACKGROUND: Oral case presentation is a crucial skill of physicians and a key component of team-based care. However, consistent and objective assessment and feedback on presentations during training are infrequent. OBJECTIVE: To determine the potential value of applying natural language processing, computer software that extracts meaning from text, to transcripts of oral case presentations as a strategy to assess their quality automatically and objectively. METHODS: We transcribed a collection of simulated oral case presentations. The presentations were from eight critical care fellows and one critical care attending. They were instructed to review the medical charts of 11 real intensive care unit patient cases and to audio record themselves, presenting each case as if they were doing so on morning rounds. We then used natural language processing to convert the transcripts from human-readable text into machine-readable numbers. These numbers represent details of the presentation style and content. The distance between the numeric representation of two different transcripts negatively correlates with the similarity of those two transcripts. We ranked fellows on the basis of how similar their presentations were to the attending’s presentations. RESULTS: The 99 presentations included 260 minutes of audio (mean length: 2.6 ± 1.24 min per case). On average, 23.88 ± 2.65 sentences were spoken, and each sentence had 14.10 ± 0.67 words, 3.62 ± 0.15 medical concepts, and 0.75 ± 0.09 medical adjectives. When ranking fellows on the basis of how similar their presentations were to the attending’s presentation, we found a gap between the five fellows with the most similar presentations and the three fellows with the least similar presentations (average group similarity scores of 0.62 ± 0.01 and 0.53 ± 0.01, respectively). Rankings were sensitive to whether presentation style or content information were weighted more heavily when calculating transcript similarity. CONCLUSION: Natural language processing enabled the ranking of case presentations on the basis of how similar they were to a reference presentation. Although additional work is needed to convert these rankings, and underlying similarity scores, into actionable feedback for trainees, these methods may support new tools for improving medical education. American Thoracic Society 2022-09-23 /pmc/articles/PMC9886197/ /pubmed/36726701 http://dx.doi.org/10.34197/ats-scholar.2022-0010OC Text en Copyright © 2022 by the American Thoracic Society https://creativecommons.org/licenses/by-nc-nd/4.0/This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial No Derivatives License 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/) . For commercial usage and reprints, please e-mail Diane Gern. |
spellingShingle | Original Research King, Andrew J. Kahn, Jeremy M. Brant, Emily B. Cooper, Gregory F. Mowery, Danielle L. Initial Development of an Automated Platform for Assessing Trainee Performance on Case Presentations |
title | Initial Development of an Automated Platform for Assessing Trainee Performance on Case Presentations |
title_full | Initial Development of an Automated Platform for Assessing Trainee Performance on Case Presentations |
title_fullStr | Initial Development of an Automated Platform for Assessing Trainee Performance on Case Presentations |
title_full_unstemmed | Initial Development of an Automated Platform for Assessing Trainee Performance on Case Presentations |
title_short | Initial Development of an Automated Platform for Assessing Trainee Performance on Case Presentations |
title_sort | initial development of an automated platform for assessing trainee performance on case presentations |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9886197/ https://www.ncbi.nlm.nih.gov/pubmed/36726701 http://dx.doi.org/10.34197/ats-scholar.2022-0010OC |
work_keys_str_mv | AT kingandrewj initialdevelopmentofanautomatedplatformforassessingtraineeperformanceoncasepresentations AT kahnjeremym initialdevelopmentofanautomatedplatformforassessingtraineeperformanceoncasepresentations AT brantemilyb initialdevelopmentofanautomatedplatformforassessingtraineeperformanceoncasepresentations AT coopergregoryf initialdevelopmentofanautomatedplatformforassessingtraineeperformanceoncasepresentations AT mowerydaniellel initialdevelopmentofanautomatedplatformforassessingtraineeperformanceoncasepresentations |
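The RESULTS portion of the description above also reports surface "style" statistics per presentation (sentences spoken, words per sentence, along with medical concepts and adjectives). As a rough, hypothetical illustration, the sketch below computes only the purely lexical statistics from a transcript; counting medical concepts or adjectives would additionally require a clinical NLP or part-of-speech tool and is not attempted here.

```python
# Rough illustration of the surface "style" statistics mentioned in the
# description (sentence count, mean words per sentence). The paper's own
# feature extraction is not specified in this record; concept and adjective
# counts would need a clinical NLP toolkit and are omitted.
import re
from statistics import mean

def style_stats(transcript: str) -> dict:
    # Naive split on terminal punctuation; spoken, transcribed presentations
    # would likely need a more robust sentence segmenter.
    sentences = [s.strip() for s in re.split(r"[.!?]+", transcript) if s.strip()]
    words_per_sentence = [len(s.split()) for s in sentences]
    return {
        "sentences": len(sentences),
        "mean_words_per_sentence": mean(words_per_sentence) if words_per_sentence else 0.0,
    }

print(style_stats("Patient remains intubated. Plan is to wean sedation today and reassess."))
```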