
A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images


Bibliographic Details
Main Authors: Wu, Huaiyu, Ye, Xiuqin, Jiang, Yitao, Tian, Hongtian, Yang, Keen, Cui, Chen, Shi, Siyuan, Liu, Yan, Huang, Sijing, Chen, Jing, Xu, Jinfeng, Dong, Fajin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9302001/
https://www.ncbi.nlm.nih.gov/pubmed/35875151
http://dx.doi.org/10.3389/fonc.2022.869421
_version_ 1784751541359476736
author Wu, Huaiyu
Ye, Xiuqin
Jiang, Yitao
Tian, Hongtian
Yang, Keen
Cui, Chen
Shi, Siyuan
Liu, Yan
Huang, Sijing
Chen, Jing
Xu, Jinfeng
Dong, Fajin
author_facet Wu, Huaiyu
Ye, Xiuqin
Jiang, Yitao
Tian, Hongtian
Yang, Keen
Cui, Chen
Shi, Siyuan
Liu, Yan
Huang, Sijing
Chen, Jing
Xu, Jinfeng
Dong, Fajin
author_sort Wu, Huaiyu
collection PubMed
description PURPOSE: The purpose of this study was to explore the performance of different combinations of deep learning (DL) models (Xception, DenseNet121, MobileNet, ResNet50 and EfficientNetB0) and input image resolutions (REZs) (224 × 224, 320 × 320 and 448 × 448 pixels) for breast cancer diagnosis. METHODS: This multicenter study retrospectively analyzed gray-scale breast ultrasound images collected from two Chinese hospitals. The data were divided into training, validation, internal testing and external testing sets. Three hundred images were randomly selected for the physician-AI comparison. The Wilcoxon test was used to compare the diagnostic errors of physicians and models at the P = 0.05 and P = 0.10 significance levels. Specificity, sensitivity, accuracy and area under the curve (AUC) were used as the primary evaluation metrics. RESULTS: A total of 13,684 images from 3,447 female patients were finally included. On the external test set, the 224 and 320 REZs achieved the best performance for MobileNet and EfficientNetB0, respectively (AUC: 0.893 and 0.907), while the 448 REZ achieved the best performance for Xception, DenseNet121 and ResNet50 (AUC: 0.900, 0.883 and 0.871, respectively). On the physician-AI test set, the 320 REZ for EfficientNetB0 (AUC: 0.896, P < 0.1) outperformed senior physicians, and the 224 REZ for MobileNet (AUC: 0.878, P < 0.1) and the 448 REZ for Xception (AUC: 0.895, P < 0.1) outperformed junior physicians, while the 448 REZ for DenseNet121 (AUC: 0.880, P < 0.05) and ResNet50 (AUC: 0.838, P < 0.05) outperformed only entry-level physicians. CONCLUSION: Based on gray-scale breast ultrasound images, we obtained the best DL model-resolution combination, which outperformed the physicians.
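The following is a minimal, illustrative sketch of the model/input-resolution grid and evaluation workflow described in the abstract above. It is not the authors' released code: it assumes TensorFlow/Keras backbones with ImageNet weights, binary benign-vs-malignant labels, scikit-learn for the AUC and SciPy for the Wilcoxon signed-rank test; all function and variable names (build_model, test_auc, run_grid, etc.) are hypothetical.

import numpy as np
import tensorflow as tf
from tensorflow.keras import Model, applications, layers
from sklearn.metrics import roc_auc_score
from scipy.stats import wilcoxon

# The five backbones and three input resolutions (REZs) compared in the study.
BACKBONES = {
    "Xception": applications.Xception,
    "DenseNet121": applications.DenseNet121,
    "MobileNet": applications.MobileNet,
    "ResNet50": applications.ResNet50,
    "EfficientNetB0": applications.EfficientNetB0,
}
RESOLUTIONS = (224, 320, 448)

def build_model(backbone_name: str, rez: int) -> Model:
    """One backbone/resolution combination with a sigmoid head for malignancy."""
    base = BACKBONES[backbone_name](
        include_top=False,
        weights="imagenet",
        input_shape=(rez, rez, 3),  # gray-scale frames replicated to 3 channels
        pooling="avg",
    )
    prob = layers.Dense(1, activation="sigmoid")(base.output)
    return Model(base.input, prob)

def test_auc(model: Model, x_test: np.ndarray, y_test: np.ndarray, rez: int) -> float:
    """AUC on a held-out (internal or external) test set, the primary metric."""
    x_rez = tf.image.resize(x_test, (rez, rez)).numpy()
    probs = model.predict(x_rez, verbose=0).ravel()
    return roc_auc_score(y_test, probs)

def physician_vs_model_pvalue(model_errors: np.ndarray,
                              physician_errors: np.ndarray) -> float:
    """Paired Wilcoxon signed-rank test on per-image diagnostic errors,
    judged against the P = 0.05 and P = 0.10 thresholds used in the study."""
    return wilcoxon(model_errors, physician_errors).pvalue

def run_grid(x_test: np.ndarray, y_test: np.ndarray) -> dict:
    """Sweep every backbone at every resolution and collect the test AUCs."""
    results = {}
    for name in BACKBONES:
        for rez in RESOLUTIONS:
            model = build_model(name, rez)  # training on the train/validation splits omitted
            results[(name, rez)] = test_auc(model, x_test, y_test, rez)
    return results

Each of the five backbones is built once per resolution (15 combinations), trained on the study's training and validation splits (omitted here), and then compared on the held-out test sets by AUC; the Wilcoxon p-value underlies the physician-versus-model statements at P < 0.05 and P < 0.1.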
format Online
Article
Text
id pubmed-9302001
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-93020012022-07-22 A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images Wu, Huaiyu Ye, Xiuqin Jiang, Yitao Tian, Hongtian Yang, Keen Cui, Chen Shi, Siyuan Liu, Yan Huang, Sijing Chen, Jing Xu, Jinfeng Dong, Fajin Front Oncol Oncology PURPOSE: The purpose of this study was to explore the performance of different combinations of deep learning (DL) models (Xception, DenseNet121, MobileNet, ResNet50 and EfficientNetB0) and input image resolutions (REZs) (224 × 224, 320 × 320 and 448 × 448 pixels) for breast cancer diagnosis. METHODS: This multicenter study retrospectively analyzed gray-scale breast ultrasound images collected from two Chinese hospitals. The data were divided into training, validation, internal testing and external testing sets. Three hundred images were randomly selected for the physician-AI comparison. The Wilcoxon test was used to compare the diagnostic errors of physicians and models at the P = 0.05 and P = 0.10 significance levels. Specificity, sensitivity, accuracy and area under the curve (AUC) were used as the primary evaluation metrics. RESULTS: A total of 13,684 images from 3,447 female patients were finally included. On the external test set, the 224 and 320 REZs achieved the best performance for MobileNet and EfficientNetB0, respectively (AUC: 0.893 and 0.907), while the 448 REZ achieved the best performance for Xception, DenseNet121 and ResNet50 (AUC: 0.900, 0.883 and 0.871, respectively). On the physician-AI test set, the 320 REZ for EfficientNetB0 (AUC: 0.896, P < 0.1) outperformed senior physicians, and the 224 REZ for MobileNet (AUC: 0.878, P < 0.1) and the 448 REZ for Xception (AUC: 0.895, P < 0.1) outperformed junior physicians, while the 448 REZ for DenseNet121 (AUC: 0.880, P < 0.05) and ResNet50 (AUC: 0.838, P < 0.05) outperformed only entry-level physicians. CONCLUSION: Based on gray-scale breast ultrasound images, we obtained the best DL model-resolution combination, which outperformed the physicians. Frontiers Media S.A. 2022-07-07 /pmc/articles/PMC9302001/ /pubmed/35875151 http://dx.doi.org/10.3389/fonc.2022.869421 Text en Copyright © 2022 Wu, Ye, Jiang, Tian, Yang, Cui, Shi, Liu, Huang, Chen, Xu and Dong https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Oncology
Wu, Huaiyu
Ye, Xiuqin
Jiang, Yitao
Tian, Hongtian
Yang, Keen
Cui, Chen
Shi, Siyuan
Liu, Yan
Huang, Sijing
Chen, Jing
Xu, Jinfeng
Dong, Fajin
A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images
title A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images
title_full A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images
title_fullStr A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images
title_full_unstemmed A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images
title_short A Comparative Study of Multiple Deep Learning Models Based on Multi-Input Resolution for Breast Ultrasound Images
title_sort comparative study of multiple deep learning models based on multi-input resolution for breast ultrasound images
topic Oncology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9302001/
https://www.ncbi.nlm.nih.gov/pubmed/35875151
http://dx.doi.org/10.3389/fonc.2022.869421
work_keys_str_mv AT wuhuaiyu acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT yexiuqin acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT jiangyitao acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT tianhongtian acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT yangkeen acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT cuichen acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT shisiyuan acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT liuyan acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT huangsijing acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT chenjing acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT xujinfeng acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT dongfajin acomparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT wuhuaiyu comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT yexiuqin comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT jiangyitao comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT tianhongtian comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT yangkeen comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT cuichen comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT shisiyuan comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT liuyan comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT huangsijing comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT chenjing comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT xujinfeng comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages
AT dongfajin comparativestudyofmultipledeeplearningmodelsbasedonmultiinputresolutionforbreastultrasoundimages