Evaluation of a decided sample size in machine learning applications
Main authors: | Rajput, Daniyal; Wang, Wei-Jen; Chen, Chun-Chuan |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2023 |
Subjects: | Research |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9926644/ https://www.ncbi.nlm.nih.gov/pubmed/36788550 http://dx.doi.org/10.1186/s12859-023-05156-9 |
_version_ | 1784888322616721408 |
---|---|
author | Rajput, Daniyal Wang, Wei-Jen Chen, Chun-Chuan |
author_facet | Rajput, Daniyal Wang, Wei-Jen Chen, Chun-Chuan |
author_sort | Rajput, Daniyal |
collection | PubMed |
description | BACKGROUND: An appropriate sample size is essential for obtaining a precise and reliable outcome of a study. In machine learning (ML), studies with inadequate samples suffer from overfitting and have a lower probability of producing true effects, while increasing the sample size improves prediction accuracy but may not cause a significant change beyond a certain sample size. Existing statistical approaches that use the standardized mean difference, effect size, and statistical power to determine sample size are potentially biased due to miscalculations or a lack of experimental details. This study aims to design criteria for evaluating sample size in ML studies. We examined the average and grand effect sizes and the performance of five ML methods on simulated datasets and three real datasets to derive criteria for sample size. We systematically increased the sample size, starting from 16, by random sampling, and examined the impact of sample size on the classifiers' performance and on both effect sizes. Tenfold cross-validation was used to quantify accuracy. RESULTS: The results demonstrate that when a dataset discriminates well between two classes, the effect sizes and classification accuracies increase, and the variances in effect sizes shrink, as samples are added. By contrast, indeterminate datasets had poor effect sizes and classification accuracies that did not improve with increasing sample size, in both simulated and real datasets. A good dataset exhibited a significant difference between the average and grand effect sizes. Based on these findings, we derived two criteria that combine effect size and ML accuracy to assess a decided sample size. The sample size is considered suitable when it yields appropriate effect sizes (≥ 0.5) and ML accuracy (≥ 80%). Beyond an appropriate sample size, additional samples bring little benefit, as they do not significantly change the effect size or accuracy; stopping there therefore yields a good cost-benefit ratio. CONCLUSION: We believe that these practical criteria can serve as a reference for both authors and editors to evaluate whether the selected sample size is adequate for a study. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12859-023-05156-9. |
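The evaluation loop the abstract describes — grow the sample size from 16 by random sampling, then at each size check the effect size (Cohen's d) and the 10-fold cross-validated accuracy against the ≥ 0.5 and ≥ 80% thresholds — can be sketched roughly as follows. This is a hypothetical illustration, not the authors' code: it uses synthetic Gaussian data and a simple nearest-centroid classifier as a stand-in for the paper's five ML methods.

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two 1-D samples (pooled SD)."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                        / (na + nb - 2))
    return abs(a.mean() - b.mean()) / pooled_sd

def cv_accuracy(X, y, k=10, seed=0):
    """k-fold CV accuracy of a nearest-centroid classifier (stand-in model)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    correct = 0
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(X[fold] - c1, axis=1)
                < np.linalg.norm(X[fold] - c0, axis=1)).astype(int)
        correct += (pred == y[fold]).sum()
    return correct / len(y)

rng = np.random.default_rng(42)
for n in (16, 32, 64, 128, 256):           # samples per class, growing from 16
    a = rng.normal(0.0, 1.0, size=(n, 5))  # class 0
    b = rng.normal(0.8, 1.0, size=(n, 5))  # class 1, shifted mean -> separable
    X = np.vstack([a, b])
    y = np.r_[np.zeros(n, int), np.ones(n, int)]
    d = np.mean([cohens_d(a[:, j], b[:, j]) for j in range(X.shape[1])])
    acc = cv_accuracy(X, y)
    adequate = d >= 0.5 and acc >= 0.80    # the paper's two criteria
    print(f"n/class={n:4d}  mean d={d:.2f}  CV acc={acc:.2%}  adequate={adequate}")
```

On a well-separated dataset like this one, d and accuracy stabilize as n grows (and the variance of d shrinks), so once both criteria hold, further samples add cost without measurable benefit — which is the paper's stopping argument.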
format | Online Article Text |
id | pubmed-9926644 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-9926644 2023-02-15 Evaluation of a decided sample size in machine learning applications Rajput, Daniyal; Wang, Wei-Jen; Chen, Chun-Chuan. BMC Bioinformatics, Research. BioMed Central 2023-02-14 /pmc/articles/PMC9926644/ /pubmed/36788550 http://dx.doi.org/10.1186/s12859-023-05156-9 Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. Images or other third-party material in this article are included in the article's Creative Commons licence unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Rajput, Daniyal Wang, Wei-Jen Chen, Chun-Chuan Evaluation of a decided sample size in machine learning applications |
title | Evaluation of a decided sample size in machine learning applications |
title_full | Evaluation of a decided sample size in machine learning applications |
title_fullStr | Evaluation of a decided sample size in machine learning applications |
title_full_unstemmed | Evaluation of a decided sample size in machine learning applications |
title_short | Evaluation of a decided sample size in machine learning applications |
title_sort | evaluation of a decided sample size in machine learning applications |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9926644/ https://www.ncbi.nlm.nih.gov/pubmed/36788550 http://dx.doi.org/10.1186/s12859-023-05156-9 |
work_keys_str_mv | AT rajputdaniyal evaluationofadecidedsamplesizeinmachinelearningapplications AT wangweijen evaluationofadecidedsamplesizeinmachinelearningapplications AT chenchunchuan evaluationofadecidedsamplesizeinmachinelearningapplications |