Deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation
BACKGROUND: A reproducible and accurate automated approach to measuring the cardiothoracic ratio on chest radiographs is warranted. This study aimed to develop a deep learning-based model for estimating the cardiothoracic ratio on chest radiographs without requiring self-annotation and to compare its results with those of manual measurements.
Main Authors: Yoshida, Kotaro; Takamatsu, Atsushi; Matsubara, Takashi; Kitagawa, Taichi; Toshima, Fomihito; Tanaka, Rie; Gabata, Toshifumi
Format: Online Article Text
Language: English
Published: AME Publishing Company, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10585545/ https://www.ncbi.nlm.nih.gov/pubmed/37869343 http://dx.doi.org/10.21037/qims-23-187
_version_ | 1785122977298251776 |
author | Yoshida, Kotaro Takamatsu, Atsushi Matsubara, Takashi Kitagawa, Taichi Toshima, Fomihito Tanaka, Rie Gabata, Toshifumi |
author_facet | Yoshida, Kotaro Takamatsu, Atsushi Matsubara, Takashi Kitagawa, Taichi Toshima, Fomihito Tanaka, Rie Gabata, Toshifumi |
author_sort | Yoshida, Kotaro |
collection | PubMed |
description | BACKGROUND: A reproducible and accurate automated approach to measuring the cardiothoracic ratio on chest radiographs is warranted. This study aimed to develop a deep learning-based model for estimating the cardiothoracic ratio on chest radiographs without requiring self-annotation and to compare its results with those of manual measurements. METHODS: A U-net architecture was designed to segment the right and left lungs and the cardiac shadow from chest radiographs. The cardiothoracic ratio was then calculated from these labels by a mathematical algorithm. The initial deep learning-based cardiothoracic ratio measurement model was developed using 247 open-source chest radiographs that had already been annotated. The advanced model was developed using a training dataset of 729 original chest radiographs, the labels of which were generated by the initial model and then screened. The cardiothoracic ratio was estimated by the two models in an independent test set of 120 original cases, and the results were compared with those obtained through manual measurement by four radiologists and with the image-reading reports. RESULTS: The means and standard deviations of the cardiothoracic ratio were 52.4% and 9.8% for the initial model, 51.0% and 9.3% for the advanced model, and 49.8% and 9.4% for the four manual measurements combined, respectively. The intraclass correlation coefficients (ICCs) of the cardiothoracic ratio ranged from 0.91 to 0.93 between the advanced model and the manual measurements, whereas those between the initial model and the manual measurements ranged from 0.77 to 0.82. CONCLUSIONS: Deep learning-based cardiothoracic ratio estimation on chest radiographs correlated favorably with manual measurements by radiologists. When the model was trained on additional local images labeled by the initial model, the correlation with manual measurements improved beyond that of the initial model alone. |
format | Online Article Text |
id | pubmed-10585545 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | AME Publishing Company |
record_format | MEDLINE/PubMed |
spelling | pubmed-105855452023-10-20 Deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation Yoshida, Kotaro Takamatsu, Atsushi Matsubara, Takashi Kitagawa, Taichi Toshima, Fomihito Tanaka, Rie Gabata, Toshifumi Quant Imaging Med Surg Original Article BACKGROUND: A reproducible and accurate automated approach to measuring the cardiothoracic ratio on chest radiographs is warranted. This study aimed to develop a deep learning-based model for estimating the cardiothoracic ratio on chest radiographs without requiring self-annotation and to compare its results with those of manual measurements. METHODS: A U-net architecture was designed to segment the right and left lungs and the cardiac shadow from chest radiographs. The cardiothoracic ratio was then calculated from these labels by a mathematical algorithm. The initial deep learning-based cardiothoracic ratio measurement model was developed using 247 open-source chest radiographs that had already been annotated. The advanced model was developed using a training dataset of 729 original chest radiographs, the labels of which were generated by the initial model and then screened. The cardiothoracic ratio was estimated by the two models in an independent test set of 120 original cases, and the results were compared with those obtained through manual measurement by four radiologists and with the image-reading reports. RESULTS: The means and standard deviations of the cardiothoracic ratio were 52.4% and 9.8% for the initial model, 51.0% and 9.3% for the advanced model, and 49.8% and 9.4% for the four manual measurements combined, respectively. The intraclass correlation coefficients (ICCs) of the cardiothoracic ratio ranged from 0.91 to 0.93 between the advanced model and the manual measurements, whereas those between the initial model and the manual measurements ranged from 0.77 to 0.82.
CONCLUSIONS: Deep learning-based cardiothoracic ratio estimation on chest radiographs correlated favorably with manual measurements by radiologists. When the model was trained on additional local images labeled by the initial model, the correlation with manual measurements improved beyond that of the initial model alone. AME Publishing Company 2023-09-13 2023-10-01 /pmc/articles/PMC10585545/ /pubmed/37869343 http://dx.doi.org/10.21037/qims-23-187 Text en 2023 Quantitative Imaging in Medicine and Surgery. All rights reserved. https://creativecommons.org/licenses/by-nc-nd/4.0/ Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/) . |
spellingShingle | Original Article Yoshida, Kotaro Takamatsu, Atsushi Matsubara, Takashi Kitagawa, Taichi Toshima, Fomihito Tanaka, Rie Gabata, Toshifumi Deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation |
title | Deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation |
title_full | Deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation |
title_fullStr | Deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation |
title_full_unstemmed | Deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation |
title_short | Deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation |
title_sort | deep learning-based cardiothoracic ratio measurement on chest radiograph: accuracy improvement without self-annotation |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10585545/ https://www.ncbi.nlm.nih.gov/pubmed/37869343 http://dx.doi.org/10.21037/qims-23-187 |
work_keys_str_mv | AT yoshidakotaro deeplearningbasedcardiothoracicratiomeasurementonchestradiographaccuracyimprovementwithoutselfannotation AT takamatsuatsushi deeplearningbasedcardiothoracicratiomeasurementonchestradiographaccuracyimprovementwithoutselfannotation AT matsubaratakashi deeplearningbasedcardiothoracicratiomeasurementonchestradiographaccuracyimprovementwithoutselfannotation AT kitagawataichi deeplearningbasedcardiothoracicratiomeasurementonchestradiographaccuracyimprovementwithoutselfannotation AT toshimafomihito deeplearningbasedcardiothoracicratiomeasurementonchestradiographaccuracyimprovementwithoutselfannotation AT tanakarie deeplearningbasedcardiothoracicratiomeasurementonchestradiographaccuracyimprovementwithoutselfannotation AT gabatatoshifumi deeplearningbasedcardiothoracicratiomeasurementonchestradiographaccuracyimprovementwithoutselfannotation |
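The METHODS text in the record above describes computing the cardiothoracic ratio from U-net segmentation labels by "a mathematical algorithm." The paper's exact algorithm is not given in this record; a minimal sketch of one common post-processing convention, assuming binary 2-D masks and taking the maximal horizontal extent of the cardiac shadow over the maximal horizontal extent of the lung fields as a proxy for thoracic width:

```python
import numpy as np


def cardiothoracic_ratio(heart_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """Estimate CTR from binary segmentation masks.

    CTR = maximal horizontal cardiac width / maximal horizontal thoracic width,
    both measured in image columns (pixels). `lung_mask` should cover both lungs
    so its lateral extent approximates the inner thoracic width.
    """

    def horizontal_extent(mask: np.ndarray) -> int:
        # Columns that contain at least one foreground pixel.
        cols = np.where(mask.any(axis=0))[0]
        if cols.size == 0:
            raise ValueError("mask contains no foreground pixels")
        return int(cols.max() - cols.min() + 1)

    return horizontal_extent(heart_mask) / horizontal_extent(lung_mask)


# Synthetic example: a 4-column heart inside a 16-column thorax -> CTR 0.25.
heart = np.zeros((10, 20), dtype=bool)
heart[4:6, 8:12] = True
lungs = np.zeros((10, 20), dtype=bool)
lungs[2:8, 2:18] = True
print(cardiothoracic_ratio(heart, lungs))  # 0.25
```

A clinically reported CTR above roughly 0.5, as with the mean values near 50% in the RESULTS above, is the conventional cardiomegaly threshold; the function names and mask conventions here are illustrative assumptions, not the authors' implementation.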