
Natural language processing for automatic evaluation of free-text answers — a feasibility study based on the European Diploma in Radiology examination


Bibliographic Details
Main Authors: Stoehr, Fabian; Kämpgen, Benedikt; Müller, Lukas; Zufiría, Laura Oleaga; Junquero, Vanesa; Merino, Cristina; Mildenberger, Peter; Kloeckner, Roman
Format: Online Article Text
Language: English
Published: Springer Vienna 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10509084/
https://www.ncbi.nlm.nih.gov/pubmed/37726485
http://dx.doi.org/10.1186/s13244-023-01507-5
_version_ 1785107665276370944
author Stoehr, Fabian
Kämpgen, Benedikt
Müller, Lukas
Zufiría, Laura Oleaga
Junquero, Vanesa
Merino, Cristina
Mildenberger, Peter
Kloeckner, Roman
author_sort Stoehr, Fabian
collection PubMed
description
BACKGROUND: Written medical examinations consist of multiple-choice questions and/or free-text answers. The latter require manual evaluation and rating, which is time-consuming and potentially error-prone. We tested whether natural language processing (NLP) can be used to automatically analyze free-text answers to support the review process.
METHODS: The European Board of Radiology of the European Society of Radiology provided representative datasets comprising sample questions, answer keys, participant answers, and reviewer markings from European Diploma in Radiology (EDiR) examinations. Three free-text questions with the highest number of corresponding answers were selected: Questions 1 and 2 were “unstructured” and required a typical free-text answer, whereas question 3 was “structured” and offered a selection of predefined wordings/phrases for participants to use in their free-text answer. The NLP engine was designed using word lists, rule-based synonyms, and decision tree learning based on the answer keys, and its performance was tested against the gold standard of reviewer markings.
RESULTS: After implementing the NLP approach in Python, F1 scores were calculated as a measure of NLP performance: 0.26 (unstructured question 1, n = 96), 0.33 (unstructured question 2, n = 327), and 0.5 (more structured question, n = 111). The respective precision/recall values were 0.26/0.27, 0.4/0.32, and 0.62/0.55.
CONCLUSION: This study showed the successful design of an NLP-based approach for automatic evaluation of free-text answers in the EDiR examination. Thus, as a future field of application, NLP could work as a decision-support system for reviewers and support the design of examinations adjusted to the requirements of an automated, NLP-based review process.
CLINICAL RELEVANCE STATEMENT: Natural language processing can be successfully used to automatically evaluate free-text answers, performing better with more structured question-answer formats. Furthermore, this study provides a baseline for further work applying, e.g., more elaborate NLP approaches/large language models.
KEY POINTS:
• Free-text answers require manual evaluation, which is time-consuming and potentially error-prone.
• We developed a simple NLP-based approach — requiring only minimal effort/modeling — to automatically analyze and mark free-text answers.
• Our NLP engine has the potential to support the manual evaluation process.
• NLP performance is better on a more structured question-answer format.
GRAPHICAL ABSTRACT: [Image: see text]
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13244-023-01507-5.
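The description gives only a high-level account of the method (word lists, rule-based synonyms, decision tree learning, evaluation by precision/recall/F1 against reviewer markings); the authors' code is not part of this record. The following is a minimal, hypothetical Python sketch of the word-list/synonym-matching idea and the F1 calculation. All names (SYNONYMS, concepts_found, the toy concepts and answers) are illustrative assumptions, not the published implementation, and the decision-tree component is omitted.

```python
# Hypothetical sketch of a word-list / rule-based-synonym marker for free-text
# answers, loosely following the approach described above (word lists, synonym
# rules, evaluation via precision/recall/F1). Not the authors' implementation;
# the concept lists and toy data below are invented for illustration only.
import re

# Assumed example: key concepts an answer key might require, each with synonyms.
SYNONYMS = {
    "pneumothorax": {"pneumothorax", "collapsed lung"},
    "chest tube": {"chest tube", "chest drain", "thoracostomy"},
}

def normalize(text: str) -> str:
    """Lower-case and strip punctuation so synonym phrases can be matched."""
    return re.sub(r"[^a-z0-9 ]+", " ", text.lower())

def concepts_found(answer: str) -> set[str]:
    """Return the key concepts whose synonyms occur in the participant's answer."""
    text = normalize(answer)
    return {
        concept
        for concept, phrases in SYNONYMS.items()
        if any(phrase in text for phrase in phrases)
    }

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the score reported in RESULTS)."""
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Toy evaluation against hypothetical reviewer markings (the gold standard).
    answers = ["Collapsed lung; insert a chest drain.", "Normal findings."]
    reviewer = [{"pneumothorax", "chest tube"}, set()]
    predicted = [concepts_found(a) for a in answers]

    tp = sum(len(p & r) for p, r in zip(predicted, reviewer))  # credited by both
    fp = sum(len(p - r) for p, r in zip(predicted, reviewer))  # engine only
    fn = sum(len(r - p) for p, r in zip(predicted, reviewer))  # reviewer only
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    print(f"precision={precision:.2f} recall={recall:.2f} F1={f1(precision, recall):.2f}")
```

In the study, such concept matches would additionally feed decision tree learning based on the answer keys to decide the marking; here the matching and the scoring metrics are shown only in their simplest form.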
format Online
Article
Text
id pubmed-10509084
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Springer Vienna
record_format MEDLINE/PubMed
spelling pubmed-10509084 2023-09-21 Natural language processing for automatic evaluation of free-text answers — a feasibility study based on the European Diploma in Radiology examination Stoehr, Fabian; Kämpgen, Benedikt; Müller, Lukas; Zufiría, Laura Oleaga; Junquero, Vanesa; Merino, Cristina; Mildenberger, Peter; Kloeckner, Roman. Insights Imaging, Original Article.
Springer Vienna 2023-09-19 /pmc/articles/PMC10509084/ /pubmed/37726485 http://dx.doi.org/10.1186/s13244-023-01507-5 Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title Natural language processing for automatic evaluation of free-text answers — a feasibility study based on the European Diploma in Radiology examination
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10509084/
https://www.ncbi.nlm.nih.gov/pubmed/37726485
http://dx.doi.org/10.1186/s13244-023-01507-5