
Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform

Bibliographic Details
Main Authors: Xu, Zhen, Escalera, Sergio, Pavão, Adrien, Richard, Magali, Tu, Wei-Wei, Yao, Quanming, Zhao, Huan, Guyon, Isabelle
Format: Online Article Text
Language: English
Published: Elsevier 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9278500/
https://www.ncbi.nlm.nih.gov/pubmed/35845844
http://dx.doi.org/10.1016/j.patter.2022.100543
author Xu, Zhen
Escalera, Sergio
Pavão, Adrien
Richard, Magali
Tu, Wei-Wei
Yao, Quanming
Zhao, Huan
Guyon, Isabelle
collection PubMed
description Obtaining a standardized benchmark of computational methods is a major issue in data-science communities. Dedicated frameworks enabling fair benchmarking in a unified environment are yet to be developed. Here, we introduce Codabench, a meta-benchmark platform that is open sourced and community driven for benchmarking algorithms or software agents versus datasets or tasks. A public instance of Codabench is open to everyone free of charge and allows benchmark organizers to fairly compare submissions under the same setting (software, hardware, data, algorithms), with custom protocols and data formats. Codabench has unique features facilitating easy organization of flexible and reproducible benchmarks, such as the possibility of reusing templates of benchmarks and supplying compute resources on demand. Codabench has been used internally and externally on various applications, receiving more than 130 users and 2,500 submissions. As illustrative use cases, we introduce four diverse benchmarks covering graph machine learning, cancer heterogeneity, clinical diagnosis, and reinforcement learning.
format Online
Article
Text
id pubmed-9278500
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-9278500 2022-07-14 Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform. Patterns (N Y). Elsevier 2022-06-24. /pmc/articles/PMC9278500/ /pubmed/35845844 http://dx.doi.org/10.1016/j.patter.2022.100543 Text en © 2022 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
title Codabench: Flexible, easy-to-use, and reproducible meta-benchmark platform
topic Descriptor