
PMLB: a large benchmark suite for machine learning evaluation and comparison

BACKGROUND: The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. RESULTS: The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. CONCLUSIONS: This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.

Bibliographic Details
Main Authors: Olson, Randal S., La Cava, William, Orzechowski, Patryk, Urbanowicz, Ryan J., Moore, Jason H.
Format: Online Article Text
Language: English
Published: BioMed Central 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5725843/
https://www.ncbi.nlm.nih.gov/pubmed/29238404
http://dx.doi.org/10.1186/s13040-017-0154-4
_version_ 1783285615620521984
author Olson, Randal S.
La Cava, William
Orzechowski, Patryk
Urbanowicz, Ryan J.
Moore, Jason H.
author_facet Olson, Randal S.
La Cava, William
Orzechowski, Patryk
Urbanowicz, Ryan J.
Moore, Jason H.
author_sort Olson, Randal S.
collection PubMed
description BACKGROUND: The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. RESULTS: The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. CONCLUSIONS: This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
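The benchmark resource described above is distributed with a companion Python package, pmlb, whose fetch_data function downloads any dataset in the suite by name. The following is a minimal sketch of the kind of suite-based evaluation the abstract describes, assuming pmlb and scikit-learn are installed; the dataset name 'mushroom' and the choice of classifier are illustrative examples, not the paper's exact protocol.

# Minimal sketch: fetch one PMLB dataset and cross-validate a classifier.
# Assumes `pip install pmlb scikit-learn`; 'mushroom' is an example dataset name.
from pmlb import fetch_data
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# fetch_data downloads (and caches) the dataset; return_X_y=True splits
# the feature matrix from the target column.
X, y = fetch_data('mushroom', return_X_y=True)

# Score an off-the-shelf model with 5-fold cross-validation, mirroring the
# per-dataset performance measurements used to compare algorithms across the suite.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"mean accuracy: {scores.mean():.3f}")

Repeating this loop over every dataset in the collection yields the dataset-by-algorithm performance matrix on which the clustering analysis in the paper is based.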
format Online
Article
Text
id pubmed-5725843
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-5725843 2017-12-13 PMLB: a large benchmark suite for machine learning evaluation and comparison Olson, Randal S. La Cava, William Orzechowski, Patryk Urbanowicz, Ryan J. Moore, Jason H. BioData Min Research BACKGROUND: The selection, development, or comparison of machine learning methods in data mining can be a difficult task based on the target problem and goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. RESULTS: The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. From this study, we find that existing benchmarks lack the diversity to properly benchmark machine learning algorithms, and there are several gaps in benchmarking problems that still need to be considered. CONCLUSIONS: This work represents another important step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future. BioMed Central 2017-12-11 /pmc/articles/PMC5725843/ /pubmed/29238404 http://dx.doi.org/10.1186/s13040-017-0154-4 Text en © The Author(s) 2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
spellingShingle Research
Olson, Randal S.
La Cava, William
Orzechowski, Patryk
Urbanowicz, Ryan J.
Moore, Jason H.
PMLB: a large benchmark suite for machine learning evaluation and comparison
title PMLB: a large benchmark suite for machine learning evaluation and comparison
title_full PMLB: a large benchmark suite for machine learning evaluation and comparison
title_fullStr PMLB: a large benchmark suite for machine learning evaluation and comparison
title_full_unstemmed PMLB: a large benchmark suite for machine learning evaluation and comparison
title_short PMLB: a large benchmark suite for machine learning evaluation and comparison
title_sort pmlb: a large benchmark suite for machine learning evaluation and comparison
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5725843/
https://www.ncbi.nlm.nih.gov/pubmed/29238404
http://dx.doi.org/10.1186/s13040-017-0154-4
work_keys_str_mv AT olsonrandals pmlbalargebenchmarksuiteformachinelearningevaluationandcomparison
AT lacavawilliam pmlbalargebenchmarksuiteformachinelearningevaluationandcomparison
AT orzechowskipatryk pmlbalargebenchmarksuiteformachinelearningevaluationandcomparison
AT urbanowiczryanj pmlbalargebenchmarksuiteformachinelearningevaluationandcomparison
AT moorejasonh pmlbalargebenchmarksuiteformachinelearningevaluationandcomparison