arXiv:1703.00512 [cs.LG]

PMLB: A Large Benchmark Suite for Machine Learning Evaluation and Comparison

Randal S. Olson, William La Cava, Patryk Orzechowski, Ryan J. Urbanowicz, Jason H. Moore

Published: 2017-03-01 (Version 1)

The selection, development, or comparison of machine learning methods in data mining can be a difficult task, depending on the target problem and the goals of a particular study. Numerous publicly available real-world and simulated benchmark datasets have emerged from different sources, but their organization and adoption as standards have been inconsistent. As such, selecting and curating specific benchmarks remains an unnecessary burden on machine learning practitioners and data scientists. The present study introduces an accessible, curated, and developing public benchmark resource to facilitate identification of the strengths and weaknesses of different machine learning methodologies. We compare meta-features among the current set of benchmark datasets in this resource to characterize the diversity of available data. Finally, we apply a number of established machine learning methods to the entire benchmark suite and analyze how datasets and algorithms cluster in terms of performance. This work is an important first step towards understanding the limitations of popular benchmarking suites and developing a resource that connects existing benchmarking standards to more diverse and efficient standards in the future.
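
The workflow described in the abstract (pulling a benchmark dataset and scoring an established learner on it) can be illustrated with a short sketch. This assumes the companion pmlb Python package and its fetch_data helper; the dataset name 'mushroom', the random-forest model, and the metric are illustrative choices, not details prescribed by the abstract.

    # Minimal sketch: fetch one PMLB dataset and score a baseline classifier.
    # Assumes the "pmlb" package (pip install pmlb); names below are illustrative.
    from pmlb import fetch_data, classification_dataset_names
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import balanced_accuracy_score

    # Download (and cache locally) a single benchmark dataset as X, y.
    X, y = fetch_data('mushroom', return_X_y=True)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # Balanced accuracy is less sensitive to class imbalance across benchmarks.
    print(balanced_accuracy_score(y_test, clf.predict(X_test)))

The same loop could be repeated over classification_dataset_names to sweep the full suite with several algorithms, in the spirit of the performance comparison the paper describes.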

Comments: 14 pages, 5 figures, submitted for review to JMLR
Categories: cs.LG, cs.AI
Related articles:
arXiv:2007.08663 [cs.LG] (Published 2020-07-16)
TUDataset: A collection of benchmark datasets for learning with graphs
arXiv:2212.10735 [cs.LG] (Published 2022-12-21)
NADBenchmarks -- a compilation of Benchmark Datasets for Machine Learning Tasks related to Natural Disasters
arXiv:2505.05064 [cs.LG] (Published 2025-05-08)
WaterDrum: Watermarking for Data-centric Unlearning Metric
Xinyang Lu et al.