
Improving Interpretability in Machine Diagnosis: Detection of Geographic Atrophy in OCT Scans

Bibliographic Details
Main Authors: Shi, Xiaoshuang, Keenan, Tiarnan D.L., Chen, Qingyu, De Silva, Tharindu, Thavikulwat, Alisa T., Broadhead, Geoffrey, Bhandari, Sanjeeb, Cukras, Catherine, Chew, Emily Y., Lu, Zhiyong
Format: Online Article Text
Language: English
Published: Elsevier 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9559084/
https://www.ncbi.nlm.nih.gov/pubmed/36247813
http://dx.doi.org/10.1016/j.xops.2021.100038
author Shi, Xiaoshuang
Keenan, Tiarnan D.L.
Chen, Qingyu
De Silva, Tharindu
Thavikulwat, Alisa T.
Broadhead, Geoffrey
Bhandari, Sanjeeb
Cukras, Catherine
Chew, Emily Y.
Lu, Zhiyong
author_sort Shi, Xiaoshuang
collection PubMed
description PURPOSE: Manually identifying geographic atrophy (GA) presence and location on OCT volume scans can be challenging and time-consuming. This study developed a deep learning model to simultaneously (1) perform automated detection of GA presence or absence from OCT volume scans and (2) provide interpretability by demonstrating which regions of which B-scans show GA.
DESIGN: Med-XAI-Net, an interpretable deep learning model, was developed to detect GA presence or absence from OCT volume scans using only volume scan labels, as well as to identify the most relevant B-scans and B-scan regions.
PARTICIPANTS: One thousand two hundred eighty-four OCT volume scans (each containing 100 B-scans) from 311 participants, including 321 volumes with GA and 963 volumes without GA.
METHODS: Med-XAI-Net simulates the human diagnostic process by using a region-attention module to locate the most relevant region in each B-scan, followed by an image-attention module to select the most relevant B-scans for classifying GA presence or absence in each OCT volume scan. Med-XAI-Net was trained and tested (80% and 20% of participants, respectively) using gold standard volume scan labels from human expert graders.
MAIN OUTCOME MEASURES: Accuracy, area under the receiver operating characteristic (ROC) curve, F1 score, sensitivity, and specificity.
RESULTS: In the detection of GA presence or absence, Med-XAI-Net obtained superior performance (91.5%, 93.5%, 82.3%, 82.8%, and 94.6% for accuracy, area under the ROC curve, F1 score, sensitivity, and specificity, respectively) to that of 2 other state-of-the-art deep learning methods. The performance of ophthalmologists grading only the 5 B-scans selected by Med-XAI-Net as most relevant (95.7%, 95.4%, 91.2%, and 100%, respectively) was almost identical to that of ophthalmologists grading all volume scans (96.0%, 95.7%, 91.8%, and 100%, respectively). Even when grading only 1 region in 1 B-scan, the ophthalmologists demonstrated moderately high performance (89.0%, 87.4%, 77.6%, and 100%, respectively).
CONCLUSIONS: Despite using ground truth labels during training at the volume scan level only, Med-XAI-Net was effective in locating GA within B-scans and selecting the relevant B-scans within each volume scan for GA diagnosis. These results illustrate the strengths of Med-XAI-Net in interpreting which regions and B-scans contribute to GA detection in the volume scan.
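The METHODS paragraph above describes a two-level attention design: region-level attention within each B-scan followed by scan-level attention across the B-scans of a volume, so that the learned attention weights double as an explanation of where GA was detected. The following is a minimal PyTorch-style sketch of that general idea; the class name, tiny CNN backbone, and dimensions are illustrative assumptions, not the authors' Med-XAI-Net implementation.

```python
# Minimal sketch of a two-level (region-level and scan-level) attention
# classifier for GA detection from OCT volumes. All names, the backbone,
# and the dimensions are illustrative assumptions, not Med-XAI-Net itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLevelAttentionNet(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Shared backbone: maps each B-scan to a 4x4 grid of region features.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.region_attn = nn.Linear(feat_dim, 1)  # scores regions within a B-scan
        self.image_attn = nn.Linear(feat_dim, 1)   # scores B-scans within the volume
        self.classifier = nn.Linear(feat_dim, 2)   # GA present / absent

    def forward(self, volume):
        # volume: (batch, n_scans, 1, H, W); e.g. n_scans = 100 B-scans per volume.
        b, s, c, h, w = volume.shape
        feat = self.backbone(volume.view(b * s, c, h, w))      # (b*s, D, 4, 4)
        regions = feat.flatten(2).transpose(1, 2)               # (b*s, 16, D)
        r_w = F.softmax(self.region_attn(regions), dim=1)       # region weights
        scan_feat = (r_w * regions).sum(1).view(b, s, -1)       # (b, s, D)
        s_w = F.softmax(self.image_attn(scan_feat), dim=1)      # B-scan weights
        vol_feat = (s_w * scan_feat).sum(1)                     # (b, D)
        logits = self.classifier(vol_feat)
        # The attention weights indicate which regions and B-scans drove the
        # prediction; they are what would support the grading experiments
        # described in the RESULTS section.
        return logits, r_w.view(b, s, -1), s_w.squeeze(-1)
```

A model of this shape needs only volume-level labels (GA present or absent) for training; the region and B-scan attention weights are learned as a by-product, which matches the weakly supervised setup described in the abstract.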
format Online
Article
Text
id pubmed-9559084
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-9559084 2022-10-14 Improving Interpretability in Machine Diagnosis: Detection of Geographic Atrophy in OCT Scans. Ophthalmol Sci, Original Article. Elsevier 2021-07-13 /pmc/articles/PMC9559084/ /pubmed/36247813 http://dx.doi.org/10.1016/j.xops.2021.100038 Text en. This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0/).
title Improving Interpretability in Machine Diagnosis: Detection of Geographic Atrophy in OCT Scans
topic Original Article