Generalization of vision pre-trained models for histopathology
Main Authors: | Sikaroudi, Milad; Hosseini, Maryam; Gonzalez, Ricardo; Rahnamayan, Shahryar; Tizhoosh, H. R. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10102232/ https://www.ncbi.nlm.nih.gov/pubmed/37055519 http://dx.doi.org/10.1038/s41598-023-33348-z |
_version_ | 1785025653286895616 |
---|---|
author | Sikaroudi, Milad; Hosseini, Maryam; Gonzalez, Ricardo; Rahnamayan, Shahryar; Tizhoosh, H. R. |
author_facet | Sikaroudi, Milad; Hosseini, Maryam; Gonzalez, Ricardo; Rahnamayan, Shahryar; Tizhoosh, H. R. |
author_sort | Sikaroudi, Milad |
collection | PubMed |
description | Out-of-distribution (OOD) generalization, especially in medical settings, is a key challenge in modern machine learning that has only recently received much attention. We investigate how different convolutional pre-trained models perform on OOD test data (that is, data from domains not seen during training) using histopathology repositories attributed to different trial sites. Different trial-site repositories, pre-trained models, and image transformations are examined as specific aspects of pre-trained models. A comparison is also performed between models trained entirely from scratch (i.e., without pre-training) and models already pre-trained. The OOD performance of models pre-trained on natural images, i.e., (1) the vanilla ImageNet pre-trained model, and (2) semi-supervised learning (SSL) and (3) semi-weakly-supervised learning (SWSL) models pre-trained on IG-1B-Targeted, is examined in this study. In addition, the performance of a histopathology model (i.e., KimiaNet) trained on the most comprehensive histopathology dataset, TCGA, has also been studied. Although SSL and SWSL pre-trained models yield better OOD performance than the vanilla ImageNet pre-trained model, the histopathology pre-trained model is still the best overall. In terms of top-1 accuracy, we demonstrate that diversifying the training images with reasonable image transformations is effective in avoiding shortcut learning when the distribution shift is significant. In addition, XAI techniques, which aim to provide high-quality, human-understandable explanations of AI decisions, are leveraged for further investigation. |
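The evaluation protocol the abstract describes (testing on trial-site repositories never seen during training, and reporting top-1 accuracy) can be illustrated with a minimal leave-one-site-out sketch. Everything here is a hypothetical stand-in: the site names, the toy labels, and the majority-class "model" are illustrative placeholders, not the paper's actual data or pre-trained networks.

```python
# Minimal sketch of leave-one-site-out OOD evaluation:
# for each trial site, "train" on the remaining sites and
# report top-1 accuracy on the held-out (out-of-distribution) site.
from collections import Counter

def top1_accuracy(predictions, labels):
    """Fraction of samples whose top-1 prediction matches the label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def leave_one_site_out(site_labels):
    """Hold out each site in turn; fit a trivial majority-class
    'model' on the other sites and score it on the held-out site."""
    results = {}
    for held_out, test_labels in site_labels.items():
        train_labels = [y for s, ys in site_labels.items()
                        if s != held_out for y in ys]
        majority = Counter(train_labels).most_common(1)[0][0]
        preds = [majority] * len(test_labels)  # stand-in for a real model
        results[held_out] = top1_accuracy(preds, test_labels)
    return results

# Toy per-site label sets (hypothetical trial sites)
sites = {
    "site_A": ["tumor", "tumor", "normal"],
    "site_B": ["normal", "normal", "normal", "tumor"],
    "site_C": ["tumor", "tumor", "tumor"],
}
print(leave_one_site_out(sites))
```

Replacing the majority-class baseline with a fine-tuned pre-trained network, while keeping the same held-out-site loop, gives the kind of per-site OOD comparison the study performs across its different pre-training regimes.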
format | Online Article Text |
id | pubmed-10102232 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10102232 2023-04-15 Generalization of vision pre-trained models for histopathology Sikaroudi, Milad Hosseini, Maryam Gonzalez, Ricardo Rahnamayan, Shahryar Tizhoosh, H. R. Sci Rep Article Out-of-distribution (OOD) generalization, especially in medical settings, is a key challenge in modern machine learning that has only recently received much attention. We investigate how different convolutional pre-trained models perform on OOD test data (that is, data from domains not seen during training) using histopathology repositories attributed to different trial sites. Different trial-site repositories, pre-trained models, and image transformations are examined as specific aspects of pre-trained models. A comparison is also performed between models trained entirely from scratch (i.e., without pre-training) and models already pre-trained. The OOD performance of models pre-trained on natural images, i.e., (1) the vanilla ImageNet pre-trained model, and (2) semi-supervised learning (SSL) and (3) semi-weakly-supervised learning (SWSL) models pre-trained on IG-1B-Targeted, is examined in this study. In addition, the performance of a histopathology model (i.e., KimiaNet) trained on the most comprehensive histopathology dataset, TCGA, has also been studied. Although SSL and SWSL pre-trained models yield better OOD performance than the vanilla ImageNet pre-trained model, the histopathology pre-trained model is still the best overall. In terms of top-1 accuracy, we demonstrate that diversifying the training images with reasonable image transformations is effective in avoiding shortcut learning when the distribution shift is significant. In addition, XAI techniques, which aim to provide high-quality, human-understandable explanations of AI decisions, are leveraged for further investigation.
Nature Publishing Group UK 2023-04-13 /pmc/articles/PMC10102232/ /pubmed/37055519 http://dx.doi.org/10.1038/s41598-023-33348-z Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Article Sikaroudi, Milad Hosseini, Maryam Gonzalez, Ricardo Rahnamayan, Shahryar Tizhoosh, H. R. Generalization of vision pre-trained models for histopathology |
title | Generalization of vision pre-trained models for histopathology |
title_full | Generalization of vision pre-trained models for histopathology |
title_fullStr | Generalization of vision pre-trained models for histopathology |
title_full_unstemmed | Generalization of vision pre-trained models for histopathology |
title_short | Generalization of vision pre-trained models for histopathology |
title_sort | generalization of vision pre-trained models for histopathology |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10102232/ https://www.ncbi.nlm.nih.gov/pubmed/37055519 http://dx.doi.org/10.1038/s41598-023-33348-z |
work_keys_str_mv | AT sikaroudimilad generalizationofvisionpretrainedmodelsforhistopathology AT hosseinimaryam generalizationofvisionpretrainedmodelsforhistopathology AT gonzalezricardo generalizationofvisionpretrainedmodelsforhistopathology AT rahnamayanshahryar generalizationofvisionpretrainedmodelsforhistopathology AT tizhooshhr generalizationofvisionpretrainedmodelsforhistopathology |