Extracting Clinical Information From Japanese Radiology Reports Using a 2-Stage Deep Learning Approach: Algorithm Development and Validation
BACKGROUND: Radiology reports are usually written in a free-text format, which makes it challenging to reuse the reports. OBJECTIVE: For secondary use, we developed a 2-stage deep learning system for extracting clinical information and converting it into a structured format. METHODS: Our system mainly consists of 2 deep learning modules: entity extraction and relation extraction. For each module, state-of-the-art deep learning models were applied. We trained and evaluated the models using 1040 in-house Japanese computed tomography (CT) reports annotated by medical experts. We also evaluated the performance of the entire pipeline of our system. In addition, the ratio of annotated entities in the reports was measured to validate the coverage of the clinical information with our information model. RESULTS: The microaveraged F1-scores of our best-performing model for entity extraction and relation extraction were 96.1% and 97.4%, respectively. The microaveraged F1-score of the 2-stage system, which is a measure of the performance of the entire pipeline of our system, was 91.9%. Our system showed encouraging results for the conversion of free-text radiology reports into a structured format. The coverage of clinical information in the reports was 96.2% (6595/6853). CONCLUSIONS: Our 2-stage deep learning system can extract clinical information from chest and abdomen CT reports accurately and comprehensively.
Main Authors: | Sugimoto, Kento; Wada, Shoya; Konishi, Shozo; Okada, Katsuki; Manabe, Shirou; Matsumura, Yasushi; Takeda, Toshihiro |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | JMIR Publications Inc, 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10686535/ https://www.ncbi.nlm.nih.gov/pubmed/37991979 http://dx.doi.org/10.2196/49041 |
author | Sugimoto, Kento; Wada, Shoya; Konishi, Shozo; Okada, Katsuki; Manabe, Shirou; Matsumura, Yasushi; Takeda, Toshihiro |
collection | PubMed |
description | BACKGROUND: Radiology reports are usually written in a free-text format, which makes it challenging to reuse the reports. OBJECTIVE: For secondary use, we developed a 2-stage deep learning system for extracting clinical information and converting it into a structured format. METHODS: Our system mainly consists of 2 deep learning modules: entity extraction and relation extraction. For each module, state-of-the-art deep learning models were applied. We trained and evaluated the models using 1040 in-house Japanese computed tomography (CT) reports annotated by medical experts. We also evaluated the performance of the entire pipeline of our system. In addition, the ratio of annotated entities in the reports was measured to validate the coverage of the clinical information with our information model. RESULTS: The microaveraged F1-scores of our best-performing model for entity extraction and relation extraction were 96.1% and 97.4%, respectively. The microaveraged F1-score of the 2-stage system, which is a measure of the performance of the entire pipeline of our system, was 91.9%. Our system showed encouraging results for the conversion of free-text radiology reports into a structured format. The coverage of clinical information in the reports was 96.2% (6595/6853). CONCLUSIONS: Our 2-stage deep learning system can extract clinical information from chest and abdomen CT reports accurately and comprehensively. (An illustrative sketch of the 2-stage pipeline and the microaveraged F1 computation follows this record.) |
format | Online Article Text |
id | pubmed-10686535 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | JMIR Publications Inc |
record_format | MEDLINE/PubMed |
spelling | pubmed-10686535 2023-11-30 JMIR Med Inform Original Paper JMIR Publications Inc 2023-11-14 /pmc/articles/PMC10686535/ /pubmed/37991979 http://dx.doi.org/10.2196/49041 Text (en) © Kento Sugimoto, Shoya Wada, Shozo Konishi, Katsuki Okada, Shirou Manabe, Yasushi Matsumura, Toshihiro Takeda. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 14.11.2023. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included. |
title | Extracting Clinical Information From Japanese Radiology Reports Using a 2-Stage Deep Learning Approach: Algorithm Development and Validation |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10686535/ https://www.ncbi.nlm.nih.gov/pubmed/37991979 http://dx.doi.org/10.2196/49041 |
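The description above summarizes a 2-stage pipeline (entity extraction followed by relation extraction) evaluated with microaveraged F1-scores. The sketch below is purely illustrative and is not the authors' implementation: `structure_report`, `entity_model`, `relation_model`, and the tuple encodings are hypothetical placeholders; only `micro_f1` follows the standard definition of the metric, which pools true positives, false positives, and false negatives across all classes before computing precision and recall.

```python
# Minimal sketch (hypothetical, not the authors' code) of a 2-stage
# extraction pipeline and the microaveraged F1-score used to evaluate it.

def structure_report(report_text, entity_model, relation_model):
    """Stage 1 extracts entities; stage 2 extracts relations among them.
    `entity_model` and `relation_model` stand in for trained deep learning
    models; the paper does not prescribe a specific API."""
    entities = entity_model(report_text)                # e.g. [(span, label), ...]
    relations = relation_model(report_text, entities)   # e.g. [(head, tail, label), ...]
    return {"entities": entities, "relations": relations}


def micro_f1(gold, predicted):
    """Microaveraged F1: pool TP/FP/FN counts over every class and
    instance, then compute precision, recall, and their harmonic mean."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)        # exact matches
    fp = len(predicted - gold)        # spurious predictions
    fn = len(gold - predicted)        # missed annotations
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0


# Toy usage: entities encoded as (report_id, start, end, label) tuples.
gold = {(1, 0, 4, "Anatomy"), (1, 5, 9, "Finding"), (2, 3, 8, "Finding")}
pred = {(1, 0, 4, "Anatomy"), (1, 5, 9, "Diagnosis"), (2, 3, 8, "Finding")}
print(f"micro-F1 = {micro_f1(gold, pred):.3f}")  # 2 of 3 exact matches -> 0.667
```

With 2 of 3 toy predictions matching exactly, the example prints a micro-F1 of about 0.667; by comparison, the abstract reports 96.1% and 97.4% for the two stages and 91.9% for the full pipeline.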