Privacy-preserving continual learning methods for medical image classification: a comparative analysis
BACKGROUND: The deployment of deep learning models for medical image classification poses significant challenges, including gradual performance degradation and limited adaptability to new diseases. However, frequent retraining of models is infeasible and raises healthcare privacy concerns...
Main Authors: | Verma, Tanvi; Jin, Liyuan; Zhou, Jun; Huang, Jia; Tan, Mingrui; Choong, Benjamin Chen Ming; Tan, Ting Fang; Gao, Fei; Xu, Xinxing; Ting, Daniel S.; Liu, Yong |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Medicine |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10461441/ https://www.ncbi.nlm.nih.gov/pubmed/37644987 http://dx.doi.org/10.3389/fmed.2023.1227515 |
_version_ | 1785097840231448576 |
---|---|
author | Verma, Tanvi Jin, Liyuan Zhou, Jun Huang, Jia Tan, Mingrui Choong, Benjamin Chen Ming Tan, Ting Fang Gao, Fei Xu, Xinxing Ting, Daniel S. Liu, Yong |
author_facet | Verma, Tanvi Jin, Liyuan Zhou, Jun Huang, Jia Tan, Mingrui Choong, Benjamin Chen Ming Tan, Ting Fang Gao, Fei Xu, Xinxing Ting, Daniel S. Liu, Yong |
author_sort | Verma, Tanvi |
collection | PubMed |
description | BACKGROUND: The deployment of deep learning models for medical image classification poses significant challenges, including gradual performance degradation and limited adaptability to new diseases. However, frequent retraining of models is infeasible and raises healthcare privacy concerns due to the retention of prior patient data. To address these issues, this study investigated privacy-preserving continual learning methods as an alternative solution. METHODS: We evaluated deep learning models based on twelve privacy-preserving, non-storage continual learning algorithms for classifying retinal diseases from public optical coherence tomography (OCT) images in a class-incremental learning scenario. The OCT dataset comprises 108,309 images across four classes: normal (47.21%), drusen (7.96%), choroidal neovascularization (CNV) (34.35%), and diabetic macular edema (DME) (10.48%). Each class contained 250 testing images. For continual training, the first task involved the CNV and normal classes, the second task the DME class, and the third task the drusen class. All selected algorithms were further evaluated with different training-sequence combinations. The final model's average class accuracy was measured, and its performance was compared against both the joint model obtained through retraining and the original finetuned model trained without continual learning algorithms. Additionally, a publicly available medical dataset for colon cancer detection based on histology slides was selected as a proof of concept, while the CIFAR10 dataset was included as a continual learning benchmark. RESULTS: Among the continual learning algorithms, Brain-Inspired Replay (BIR) outperformed the others in the continual learning-based classification of retinal diseases from OCT images, achieving an accuracy of 62.00% (95% confidence interval: 59.36-64.64%), with consistent top performance observed across different training sequences.
For colon cancer histology classification, Efficient Feature Transformations (EFT) attained the highest accuracy of 66.82% (95% confidence interval: 64.23-69.42%). In comparison, the joint model achieved accuracies of 90.76% and 89.28%, respectively. The finetuned model exhibited catastrophic forgetting on both datasets. CONCLUSION: Although the joint retraining model exhibited superior performance, continual learning holds promise for mitigating catastrophic forgetting and enabling continual model updates while preserving privacy in healthcare deep learning models. It therefore presents a highly promising solution for the long-term clinical deployment of such models. |
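The class-incremental protocol described in the abstract (task 1: CNV + normal; task 2: DME; task 3: drusen; final performance reported as average class accuracy over 250 test images per class) can be sketched as follows. This is a minimal illustration of the evaluation setup only, not the authors' code; all names and the toy accuracy figures are hypothetical.

```python
# Hypothetical sketch of the class-incremental split and metric from the
# abstract: task 1 = {CNV, normal}, task 2 = {DME}, task 3 = {drusen}.
TASKS = [
    ["CNV", "normal"],  # task 1
    ["DME"],            # task 2
    ["drusen"],         # task 3
]

def class_incremental_schedule(tasks):
    """Yield (new classes, all classes seen so far) after each task.

    In class-incremental learning the model is evaluated on the union of
    all classes encountered up to the current task.
    """
    seen = []
    for task_classes in tasks:
        seen.extend(task_classes)
        yield task_classes, list(seen)

def average_class_accuracy(per_class_correct, per_class_total):
    """Mean of per-class accuracies, weighting every class equally --
    the 'average class accuracy' metric named in the abstract."""
    accs = [c / t for c, t in zip(per_class_correct, per_class_total)]
    return sum(accs) / len(accs)

if __name__ == "__main__":
    for i, (new, seen) in enumerate(class_incremental_schedule(TASKS), 1):
        print(f"Task {i}: train on {new}, evaluate on {seen}")
    # Toy per-class results over 250 test images per class (as in the OCT setup)
    print(average_class_accuracy([200, 150, 100, 50], [250, 250, 250, 250]))
```

Weighting each class equally matters here because the OCT training data is highly imbalanced (e.g. 47.21% normal vs. 7.96% drusen), so a plain overall accuracy would mask forgetting of the rarer classes.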
format | Online Article Text |
id | pubmed-10461441 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-104614412023-08-29 Privacy-preserving continual learning methods for medical image classification: a comparative analysis Verma, Tanvi Jin, Liyuan Zhou, Jun Huang, Jia Tan, Mingrui Choong, Benjamin Chen Ming Tan, Ting Fang Gao, Fei Xu, Xinxing Ting, Daniel S. Liu, Yong Front Med (Lausanne) Medicine BACKGROUND: The deployment of deep learning models for medical image classification poses significant challenges, including gradual performance degradation and limited adaptability to new diseases. However, frequent retraining of models is infeasible and raises healthcare privacy concerns due to the retention of prior patient data. To address these issues, this study investigated privacy-preserving continual learning methods as an alternative solution. METHODS: We evaluated deep learning models based on twelve privacy-preserving, non-storage continual learning algorithms for classifying retinal diseases from public optical coherence tomography (OCT) images in a class-incremental learning scenario. The OCT dataset comprises 108,309 images across four classes: normal (47.21%), drusen (7.96%), choroidal neovascularization (CNV) (34.35%), and diabetic macular edema (DME) (10.48%). Each class contained 250 testing images. For continual training, the first task involved the CNV and normal classes, the second task the DME class, and the third task the drusen class. All selected algorithms were further evaluated with different training-sequence combinations. The final model's average class accuracy was measured, and its performance was compared against both the joint model obtained through retraining and the original finetuned model trained without continual learning algorithms. Additionally, a publicly available medical dataset for colon cancer detection based on histology slides was selected as a proof of concept, while the CIFAR10 dataset was included as a continual learning benchmark.
RESULTS: Among the continual learning algorithms, Brain-Inspired Replay (BIR) outperformed the others in the continual learning-based classification of retinal diseases from OCT images, achieving an accuracy of 62.00% (95% confidence interval: 59.36-64.64%), with consistent top performance observed across different training sequences. For colon cancer histology classification, Efficient Feature Transformations (EFT) attained the highest accuracy of 66.82% (95% confidence interval: 64.23-69.42%). In comparison, the joint model achieved accuracies of 90.76% and 89.28%, respectively. The finetuned model exhibited catastrophic forgetting on both datasets. CONCLUSION: Although the joint retraining model exhibited superior performance, continual learning holds promise for mitigating catastrophic forgetting and enabling continual model updates while preserving privacy in healthcare deep learning models. It therefore presents a highly promising solution for the long-term clinical deployment of such models. Frontiers Media S.A. 2023-08-14 /pmc/articles/PMC10461441/ /pubmed/37644987 http://dx.doi.org/10.3389/fmed.2023.1227515 Text en Copyright © 2023 Verma, Jin, Zhou, Huang, Tan, Choong, Tan, Gao, Xu, Ting and Liu. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Medicine Verma, Tanvi Jin, Liyuan Zhou, Jun Huang, Jia Tan, Mingrui Choong, Benjamin Chen Ming Tan, Ting Fang Gao, Fei Xu, Xinxing Ting, Daniel S. Liu, Yong Privacy-preserving continual learning methods for medical image classification: a comparative analysis |
title | Privacy-preserving continual learning methods for medical image classification: a comparative analysis |
title_full | Privacy-preserving continual learning methods for medical image classification: a comparative analysis |
title_fullStr | Privacy-preserving continual learning methods for medical image classification: a comparative analysis |
title_full_unstemmed | Privacy-preserving continual learning methods for medical image classification: a comparative analysis |
title_short | Privacy-preserving continual learning methods for medical image classification: a comparative analysis |
title_sort | privacy-preserving continual learning methods for medical image classification: a comparative analysis |
topic | Medicine |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10461441/ https://www.ncbi.nlm.nih.gov/pubmed/37644987 http://dx.doi.org/10.3389/fmed.2023.1227515 |
work_keys_str_mv | AT vermatanvi privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT jinliyuan privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT zhoujun privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT huangjia privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT tanmingrui privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT choongbenjaminchenming privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT tantingfang privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT gaofei privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT xuxinxing privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT tingdaniels privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis AT liuyong privacypreservingcontinuallearningmethodsformedicalimageclassificationacomparativeanalysis |