
Automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach


Bibliographic Details
Main Authors: Hege, Inga, Kiesewetter, Isabel, Adler, Martin
Format: Online Article Text
Language: English
Published: BioMed Central 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7565765/
https://www.ncbi.nlm.nih.gov/pubmed/33066789
http://dx.doi.org/10.1186/s12909-020-02297-w
_version_ 1783596004221648896
author Hege, Inga
Kiesewetter, Isabel
Adler, Martin
author_facet Hege, Inga
Kiesewetter, Isabel
Adler, Martin
author_sort Hege, Inga
collection PubMed
description BACKGROUND: The ability to compose a concise summary statement about a patient is a good indicator of the clinical reasoning abilities of healthcare students. To assess such summary statements manually, a rubric based on five categories (use of semantic qualifiers, narrowing, transformation, accuracy, and global rating) has been published. Our aim was to explore whether computer-based methods can automatically assess summary statements composed by learners in virtual patient scenarios, based on the available rubric, in real time, to serve as a basis for immediate feedback to learners. METHODS: We randomly selected 125 summary statements in German and English composed by learners in five different virtual patient scenarios and manually rated them based on the rubric plus an additional category for the use of the virtual patient's name. We then implemented a natural language processing approach, combined with our own algorithm, to automatically assess the same 125 statements and compared the manual and automatic ratings in each category. RESULTS: We found moderate agreement between the manual and automatic ratings in most categories. However, further analysis and development are needed, especially for a more reliable assessment of factual accuracy and for identifying patient names in the German statements. CONCLUSIONS: Despite some areas for improvement, we believe our results justify a careful display of the computer-calculated assessment scores as feedback to learners. It will be important to emphasize that the rating is an approximation and to give learners the possibility to flag supposedly incorrect assessments, which will also help us further improve the rating algorithms.
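A rubric-based automatic rating like the one described above could, for one category, work by matching a statement against a list of semantic qualifiers. The following is a minimal illustrative sketch, not the authors' actual implementation; the qualifier list and the score thresholds are assumptions for demonstration only:

```python
# Illustrative sketch: score the "semantic qualifiers" rubric category
# by counting qualifier terms in a summary statement. The qualifier
# list and the 0-2 score thresholds are assumed, not from the paper.
import re

SEMANTIC_QUALIFIERS = {
    "acute", "chronic", "sudden", "gradual", "constant",
    "intermittent", "unilateral", "bilateral", "mild", "severe",
}

def score_semantic_qualifiers(statement: str) -> int:
    """Return an assumed rubric score 0-2 based on qualifier count."""
    # Lowercase and split into letter runs (keeps German umlauts).
    tokens = re.findall(r"[a-zäöüß]+", statement.lower())
    hits = sum(1 for t in tokens if t in SEMANTIC_QUALIFIERS)
    if hits >= 3:
        return 2   # several qualifiers used
    if hits >= 1:
        return 1   # at least one qualifier used
    return 0       # no qualifiers found

print(score_semantic_qualifiers(
    "65-year-old man with acute, severe, unilateral chest pain"))  # 2
```

The other rubric categories (narrowing, transformation, accuracy) would need more than keyword matching, which is consistent with the paper's finding that factual accuracy was the hardest category to rate automatically.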
format Online
Article
Text
id pubmed-7565765
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-7565765 2020-10-20 Automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach Hege, Inga Kiesewetter, Isabel Adler, Martin BMC Med Educ Software BioMed Central 2020-10-16 /pmc/articles/PMC7565765/ /pubmed/33066789 http://dx.doi.org/10.1186/s12909-020-02297-w Text en © The Author(s) 2020. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Software
Hege, Inga
Kiesewetter, Isabel
Adler, Martin
Automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach
title Automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach
title_full Automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach
title_fullStr Automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach
title_full_unstemmed Automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach
title_short Automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach
title_sort automatic analysis of summary statements in virtual patients - a pilot study evaluating a machine learning approach
topic Software
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7565765/
https://www.ncbi.nlm.nih.gov/pubmed/33066789
http://dx.doi.org/10.1186/s12909-020-02297-w
work_keys_str_mv AT hegeinga automaticanalysisofsummarystatementsinvirtualpatientsapilotstudyevaluatingamachinelearningapproach
AT kiesewetterisabel automaticanalysisofsummarystatementsinvirtualpatientsapilotstudyevaluatingamachinelearningapproach
AT adlermartin automaticanalysisofsummarystatementsinvirtualpatientsapilotstudyevaluatingamachinelearningapproach