TY - JOUR
T1 - A Review of Benchmark and Test Functions for Global Optimization Algorithms and Metaheuristics
AU - Naser, M. Z.
AU - Al-Bashiti, Mohammad Khaled
AU - Tapeh, Arash Teymori Gharah
AU - Naser, Ahmad
AU - Kodur, Venkatesh
AU - Hawileh, Rami
AU - Abdalla, Jamal
AU - Khodadadi, Nima
AU - Gandomi, Amir H.
AU - Eslamlou, Armin Dadras
N1 - Publisher Copyright:
© 2025 The Author(s). WIREs Computational Statistics published by Wiley Periodicals LLC.
PY - 2025/6
Y1 - 2025/6
N2 - Benchmarking in optimization is a critical step in evaluating the performance, robustness, and scalability of machine learning algorithms and metaheuristics. While trends in benchmark design continue to evolve, synthetic functions remain vital for fundamental stress tests and theoretical evaluations. Although many benchmark and test functions have been developed over the past decades, little attention has been given to classifying such test functions and the rationale behind their usage. From this lens, this paper reviews and categorizes a broad range of functions often employed in assessing optimizers and metaheuristics. More specifically, we classify test functions based on modality, dimensionality, separability, smoothness, constraints, and noise characteristics to offer a broad view that aids in selecting appropriate benchmarks for various algorithmic challenges. This review then discusses in detail the 25 most commonly used functions in the open literature and proposes two new, high-dimensional, dynamic, and challenging functions that could be used for testing new algorithms. Finally, this review identifies gaps in current benchmarking practices and directions for future research, as well as suggesting best practices and guidelines.
AB - Benchmarking in optimization is a critical step in evaluating the performance, robustness, and scalability of machine learning algorithms and metaheuristics. While trends in benchmark design continue to evolve, synthetic functions remain vital for fundamental stress tests and theoretical evaluations. Although many benchmark and test functions have been developed over the past decades, little attention has been given to classifying such test functions and the rationale behind their usage. From this lens, this paper reviews and categorizes a broad range of functions often employed in assessing optimizers and metaheuristics. More specifically, we classify test functions based on modality, dimensionality, separability, smoothness, constraints, and noise characteristics to offer a broad view that aids in selecting appropriate benchmarks for various algorithmic challenges. This review then discusses in detail the 25 most commonly used functions in the open literature and proposes two new, high-dimensional, dynamic, and challenging functions that could be used for testing new algorithms. Finally, this review identifies gaps in current benchmarking practices and directions for future research, as well as suggesting best practices and guidelines.
KW - artificial intelligence
KW - optimization
KW - test functions
UR - https://www.scopus.com/pages/publications/105006557866
U2 - 10.1002/wics.70028
DO - 10.1002/wics.70028
M3 - Review article
AN - SCOPUS:105006557866
SN - 1939-5108
VL - 17
JO - Wiley Interdisciplinary Reviews: Computational Statistics
JF - Wiley Interdisciplinary Reviews: Computational Statistics
IS - 2
M1 - e70028
ER -