Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia
Main Authors: | Durrani, Nabeel; Vukovic, Damjan; van der Burgt, Jeroen; Antico, Maria; van Sloun, Ruud J. G.; Canty, David; Steffens, Marian; Wang, Andrew; Royse, Alistair; Royse, Colin; Haji, Kavi; Dowling, Jason; Chetty, Girija; Fontanarosa, Davide |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9584232/ https://www.ncbi.nlm.nih.gov/pubmed/36266463 http://dx.doi.org/10.1038/s41598-022-22196-y |
_version_ | 1784813218132131840 |
author | Durrani, Nabeel Vukovic, Damjan van der Burgt, Jeroen Antico, Maria van Sloun, Ruud J. G. Canty, David Steffens, Marian Wang, Andrew Royse, Alistair Royse, Colin Haji, Kavi Dowling, Jason Chetty, Girija Fontanarosa, Davide |
author_facet | Durrani, Nabeel Vukovic, Damjan van der Burgt, Jeroen Antico, Maria van Sloun, Ruud J. G. Canty, David Steffens, Marian Wang, Andrew Royse, Alistair Royse, Colin Haji, Kavi Dowling, Jason Chetty, Girija Fontanarosa, Davide |
author_sort | Durrani, Nabeel |
collection | PubMed |
description | Our automated deep learning-based approach identifies consolidation/collapse in LUS images to aid in the identification of late stages of COVID-19 induced pneumonia, where consolidation/collapse is one of the possible associated pathologies. A common challenge in training such models is that annotating each frame of an ultrasound video requires high labelling effort. In practice, this effort becomes prohibitive for large ultrasound datasets. To understand the impact of various degrees of labelling precision, we compare labelling strategies to train fully supervised models (frame-based method, higher labelling effort) and inaccurately supervised models (video-based methods, lower labelling effort), both of which yield binary predictions for LUS videos on a frame-by-frame level. Moreover, we introduce a novel sampled quaternary method which randomly samples only 10% of the LUS video frames and subsequently assigns (ordinal) categorical labels to all frames in the video based on the fraction of positively annotated samples. This method outperformed the inaccurately supervised video-based method and, more surprisingly, the supervised frame-based approach with respect to metrics such as precision-recall area under curve (PR-AUC) and F1 score, despite being a form of inaccurate learning. We argue that our video-based method is more robust with respect to label noise and mitigates overfitting in a manner similar to label smoothing. The algorithm was trained using ten-fold cross validation, which resulted in a PR-AUC score of 73% and an accuracy of 89%. While the efficacy of our classifier using the sampled quaternary method, which significantly lowers the labelling effort, must still be verified on a larger consolidation/collapse dataset, our proposed classifier using the sampled quaternary video-based method is clinically comparable with trained experts’ performance. |
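The sampled quaternary labelling described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: annotate a random ~10% of a video's frames, compute the fraction of positive annotations, and assign one of four ordinal categories to every frame. The bin thresholds and function names here are assumptions for illustration; the abstract does not specify them.

```python
import random

def sampled_quaternary_labels(frame_annotations, sample_frac=0.1, seed=0):
    """Assign one ordinal category to every frame of a video.

    frame_annotations: per-frame 0/1 ground-truth labels (only a sampled
    subset is actually "annotated" here, mimicking the reduced effort).
    Returns (category, labels), where `labels` repeats `category` for
    all frames in the video.
    """
    rng = random.Random(seed)
    n = len(frame_annotations)
    k = max(1, round(sample_frac * n))          # annotate ~10% of frames
    sampled = rng.sample(frame_annotations, k)  # the only labels we "pay" for
    frac_pos = sum(sampled) / k                 # fraction annotated positive
    # Map the fraction to one of four ordinal categories (hypothetical bins).
    if frac_pos == 0.0:
        category = 0   # no consolidation/collapse observed in the sample
    elif frac_pos < 0.5:
        category = 1
    elif frac_pos < 1.0:
        category = 2
    else:
        category = 3   # every sampled frame annotated positive
    return category, [category] * n
```

Because every frame inherits a coarse video-level category, the resulting labels are deliberately "inaccurate" at the frame level; the abstract argues this acts much like label smoothing, improving robustness to label noise.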
format | Online Article Text |
id | pubmed-9584232 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-95842322022-10-21 Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia Durrani, Nabeel Vukovic, Damjan van der Burgt, Jeroen Antico, Maria van Sloun, Ruud J. G. Canty, David Steffens, Marian Wang, Andrew Royse, Alistair Royse, Colin Haji, Kavi Dowling, Jason Chetty, Girija Fontanarosa, Davide Sci Rep Article Our automated deep learning-based approach identifies consolidation/collapse in LUS images to aid in the identification of late stages of COVID-19 induced pneumonia, where consolidation/collapse is one of the possible associated pathologies. A common challenge in training such models is that annotating each frame of an ultrasound video requires high labelling effort. This effort in practice becomes prohibitive for large ultrasound datasets. To understand the impact of various degrees of labelling precision, we compare labelling strategies to train fully supervised models (frame-based method, higher labelling effort) and inaccurately supervised models (video-based methods, lower labelling effort), both of which yield binary predictions for LUS videos on a frame-by-frame level. We moreover introduce a novel sampled quaternary method which randomly samples only 10% of the LUS video frames and subsequently assigns (ordinal) categorical labels to all frames in the video based on the fraction of positively annotated samples. This method outperformed the inaccurately supervised video-based method and more surprisingly, the supervised frame-based approach with respect to metrics such as precision-recall area under curve (PR-AUC) and F1 score, despite being a form of inaccurate learning. We argue that our video-based method is more robust with respect to label noise and mitigates overfitting in a manner similar to label smoothing. The algorithm was trained using a ten-fold cross validation, which resulted in a PR-AUC score of 73% and an accuracy of 89%. 
While the efficacy of our classifier using the sampled quaternary method, which significantly lowers the labelling effort, must still be verified on a larger consolidation/collapse dataset, our proposed classifier using the sampled quaternary video-based method is clinically comparable with trained experts’ performance. Nature Publishing Group UK 2022-10-20 /pmc/articles/PMC9584232/ /pubmed/36266463 http://dx.doi.org/10.1038/s41598-022-22196-y Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Durrani, Nabeel Vukovic, Damjan van der Burgt, Jeroen Antico, Maria van Sloun, Ruud J. G. Canty, David Steffens, Marian Wang, Andrew Royse, Alistair Royse, Colin Haji, Kavi Dowling, Jason Chetty, Girija Fontanarosa, Davide Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia |
title | Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia |
title_full | Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia |
title_fullStr | Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia |
title_full_unstemmed | Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia |
title_short | Automatic deep learning-based consolidation/collapse classification in lung ultrasound images for COVID-19 induced pneumonia |
title_sort | automatic deep learning-based consolidation/collapse classification in lung ultrasound images for covid-19 induced pneumonia |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9584232/ https://www.ncbi.nlm.nih.gov/pubmed/36266463 http://dx.doi.org/10.1038/s41598-022-22196-y |
work_keys_str_mv | AT durraninabeel automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT vukovicdamjan automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT vanderburgtjeroen automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT anticomaria automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT vanslounruudjg automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT cantydavid automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT steffensmarian automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT wangandrew automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT roysealistair automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT roysecolin automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT hajikavi automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT dowlingjason automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT chettygirija automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia AT fontanarosadavide automaticdeeplearningbasedconsolidationcollapseclassificationinlungultrasoundimagesforcovid19inducedpneumonia |