Can Machine Learning Be Better than Biased Readers?

Bibliographic Details

Main Authors: Hibi, Atsuhiro; Zhu, Rui; Tyrrell, Pascal N.
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10204355/
https://www.ncbi.nlm.nih.gov/pubmed/37218934
http://dx.doi.org/10.3390/tomography9030074
author Hibi, Atsuhiro
Zhu, Rui
Tyrrell, Pascal N.
collection PubMed
description Background: Training machine learning (ML) models in medical imaging requires large amounts of labeled data. To minimize the labeling workload, it is common to divide training data among multiple readers for separate annotation without consensus and then combine the labeled data for training an ML model. This can lead to a biased training dataset and poor ML algorithm prediction performance. The purpose of this study is to determine whether ML algorithms can overcome biases caused by multiple readers’ labeling without consensus. Methods: This study used a publicly available chest X-ray dataset of pediatric pneumonia. As an analogy to a practical dataset without labeling consensus among multiple readers, random and systematic errors were artificially added to the dataset to generate biased data for a binary classification task. A ResNet18-based convolutional neural network (CNN) was used as the baseline model. A ResNet18 model with a regularization term added to the loss function was used to examine whether it improved on the baseline model. Results: False positive labels, false negative labels, and random errors (5–25%) resulted in a loss of AUC (0–14%) when training a binary CNN classifier. The model with a regularized loss function improved the AUC (75–84%) over that of the baseline model (65–79%). Conclusion: This study indicated that it is possible for ML algorithms to overcome individual readers’ biases when consensus is not available. Regularized loss functions are recommended when annotation tasks are allocated to multiple readers, as they are easy to implement and effective in mitigating biased labels.
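The description above is the only methodological text in this record, so a brief illustration may help. The Python/PyTorch sketch below shows one way the two ingredients it mentions could look: injecting systematic (false positive / false negative) and random errors into binary labels, and pairing a ResNet18 baseline with a loss that carries a regularization term. The names corrupt_labels and RegularizedBCELoss, the entropy-based confidence penalty, and the weight lam are illustrative assumptions; the abstract does not state the authors' actual regularizer or code.

```python
"""Hypothetical sketch (not the authors' code): simulate biased labels from
multiple readers and train a ResNet18 baseline with a regularized loss."""
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def corrupt_labels(labels, fp_rate=0.0, fn_rate=0.0, random_rate=0.0, seed=0):
    """Inject systematic (false positive / false negative) and random errors
    into a binary 0/1 label vector, mimicking readers annotating without
    consensus. Rates correspond to the 5-25% range studied."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    neg_idx = np.where(labels == 0)[0]
    pos_idx = np.where(labels == 1)[0]
    # Systematic errors: flip a fraction of negatives to positive (false
    # positives) and a fraction of positives to negative (false negatives).
    noisy[rng.choice(neg_idx, int(fp_rate * len(neg_idx)), replace=False)] = 1
    noisy[rng.choice(pos_idx, int(fn_rate * len(pos_idx)), replace=False)] = 0
    # Random errors: flip a fraction of all labels regardless of class.
    flip = rng.choice(len(labels), int(random_rate * len(labels)), replace=False)
    noisy[flip] = 1 - noisy[flip]
    return noisy


class RegularizedBCELoss(torch.nn.Module):
    """Binary cross-entropy plus an entropy-based confidence penalty, one
    plausible form of the regularization term; the paper's exact term is not
    specified in the abstract."""

    def __init__(self, lam=0.1):
        super().__init__()
        self.lam = lam  # regularization weight (illustrative value)

    def forward(self, logits, targets):
        # targets: float tensor of 0/1 values with the same shape as logits
        bce = F.binary_cross_entropy_with_logits(logits, targets)
        p = torch.sigmoid(logits)
        entropy = -(p * torch.log(p + 1e-8)
                    + (1 - p) * torch.log(1 - p + 1e-8)).mean()
        return bce - self.lam * entropy  # reward uncertainty on noisy labels


# Baseline: ResNet18 adapted to a single-logit binary classifier.
model = resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)
criterion = RegularizedBCELoss(lam=0.1)
```

In this sketch, corrupt_labels would be applied to the training labels at the studied error rates (5–25%) before fitting the classifier, with AUC presumably evaluated against clean test labels.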
format Online
Article
Text
id pubmed-10204355
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10204355 2023-05-24 Can Machine Learning Be Better than Biased Readers? Hibi, Atsuhiro; Zhu, Rui; Tyrrell, Pascal N. Tomography. Article. MDPI 2023-04-28 /pmc/articles/PMC10204355/ /pubmed/37218934 http://dx.doi.org/10.3390/tomography9030074 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Can Machine Learning Be Better than Biased Readers?
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10204355/
https://www.ncbi.nlm.nih.gov/pubmed/37218934
http://dx.doi.org/10.3390/tomography9030074