
Transformer-based structuring of free-text radiology report databases

Bibliographic Details
Main Authors: Nowak, S., Biesner, D., Layer, Y. C., Theis, M., Schneider, H., Block, W., Wulff, B., Attenberger, U. I., Sifa, R., Sprinkart, A. M.
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10181962/
https://www.ncbi.nlm.nih.gov/pubmed/36905469
http://dx.doi.org/10.1007/s00330-023-09526-y
author Nowak, S.
Biesner, D.
Layer, Y. C.
Theis, M.
Schneider, H.
Block, W.
Wulff, B.
Attenberger, U. I.
Sifa, R.
Sprinkart, A. M.
author_sort Nowak, S.
collection PubMed
description OBJECTIVES: To provide insights for on-site development of transformer-based structuring of free-text report databases by investigating different labeling and pre-training strategies. METHODS: A total of 93,368 German chest X-ray reports from 20,912 intensive care unit (ICU) patients were included. Two labeling strategies were investigated to tag six findings of the attending radiologist. First, a system based on human-defined rules was applied for annotation of all reports (termed “silver labels”). Second, 18,000 reports were manually annotated in 197 h (termed “gold labels”), of which 10% were used for testing. An on-site pre-trained model (T(mlm)) using masked-language modeling (MLM) was compared to a public, medically pre-trained model (T(med)). Both models were fine-tuned for text classification on silver labels only, on gold labels only, and first on silver and then on gold labels (hybrid training), using varying numbers (N: 500, 1000, 2000, 3500, 7000, 14,580) of gold labels. Macro-averaged F1-scores (MAF1) in percent were calculated with 95% confidence intervals (CI). RESULTS: T(mlm,gold) (95.5 [94.5–96.3]) showed significantly higher MAF1 than T(med,silver) (75.0 [73.4–76.5]) and T(mlm,silver) (75.2 [73.6–76.7]), but not significantly higher MAF1 than T(med,gold) (94.7 [93.6–95.6]), T(med,hybrid) (94.9 [93.9–95.8]), and T(mlm,hybrid) (95.2 [94.3–96.0]). When using 7000 or fewer gold-labeled reports, T(mlm,gold) (N: 7000, 94.7 [93.5–95.7]) showed significantly higher MAF1 than T(med,gold) (N: 7000, 91.5 [90.0–92.8]). With at least 2000 gold-labeled reports, utilizing silver labels did not lead to a significant improvement of T(mlm,hybrid) (N: 2000, 91.8 [90.4–93.2]) over T(mlm,gold) (N: 2000, 91.4 [89.9–92.8]). CONCLUSIONS: Custom pre-training of transformers and fine-tuning on manual annotations promises to be an efficient strategy for unlocking report databases for data-driven medicine.
KEY POINTS: • On-site development of natural language processing methods that retrospectively unlock the free-text databases of radiology clinics for data-driven medicine is of great interest. • For clinics seeking to develop methods on-site for retrospectively structuring the report database of a given department, it remains unclear which of the previously proposed strategies for labeling reports and pre-training models is most appropriate in the context of, e.g., available annotator time. • Using a custom pre-trained transformer model, along with modest annotation effort, promises to be an efficient way to retrospectively structure radiological databases, even when millions of reports are not available for pre-training. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00330-023-09526-y.
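The RESULTS above are reported as macro-averaged F1-scores with 95% confidence intervals. As an illustration only (the record does not state the authors' exact CI procedure; a percentile bootstrap over the test reports is assumed here), a minimal plain-Python sketch of both quantities:

```python
import random

def macro_f1(y_true, y_pred, labels):
    """Macro-averaged F1: the unweighted mean of the per-label F1 scores."""
    f1s = []
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        denom = 2 * tp + fp + fn
        # Convention: F1 = 0 when a label never occurs in truth or prediction.
        f1s.append(2 * tp / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

def bootstrap_ci(y_true, y_pred, labels, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for macro-F1: resample test cases with
    replacement, recompute the score, and take the alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores.append(macro_f1([y_true[i] for i in idx],
                               [y_pred[i] for i in idx], labels))
    scores.sort()
    lo = scores[int(alpha / 2 * n_boot)]
    hi = scores[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi
```

In practice one would resample at the level of whole reports (each carrying six finding labels); the sketch above uses flat label lists to keep the idea visible.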
format Online
Article
Text
id pubmed-10181962
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Springer Berlin Heidelberg
record_format MEDLINE/PubMed
spelling pubmed-10181962 2023-05-14 Transformer-based structuring of free-text radiology report databases Eur Radiol Imaging Informatics and Artificial Intelligence Springer Berlin Heidelberg 2023-03-11 2023 /pmc/articles/PMC10181962/ /pubmed/36905469 http://dx.doi.org/10.1007/s00330-023-09526-y Text en © The Author(s) 2023. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title Transformer-based structuring of free-text radiology report databases
topic Imaging Informatics and Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10181962/
https://www.ncbi.nlm.nih.gov/pubmed/36905469
http://dx.doi.org/10.1007/s00330-023-09526-y