Griewank function
The Griewank test function is a smooth multidimensional mathematical function used in unconstrained optimization. It is commonly employed to evaluate the performance of global optimization algorithms. The function is defined[1] as:

f(x) = 1 + (1/4000) ∑_{i=1}^{n} x_i² − ∏_{i=1}^{n} cos(x_i/√i)
where x = (x_1, …, x_n) is a vector of real-valued variables. The nonlinear and nonconvex function is characterized by its multimodal structure, featuring numerous local minima and maxima around its single global minimum at x = 0, where the function attains its minimal value f(0) = 0. The number of stationary points increases as the dimensionality grows.
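The definition above translates directly into code. The following is a minimal NumPy sketch; the function name and the vectorized form are this example's choices, not part of the original definition:

```python
import numpy as np

def griewank(x):
    """Griewank function evaluated at a point x of arbitrary dimension n."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)  # 1-based indices for the sqrt(i) scaling
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))
```

Since the product of cosines never exceeds 1, the function is nonnegative everywhere, and `griewank(np.zeros(n))` returns the global minimum value 0 for any dimension n.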
For instance, the one-dimensional Griewank function

f(x) = 1 + x²/4000 − cos(x)

has 6365 critical points in the interval [−10000, 10000], where the first derivative vanishes. These points satisfy the equation

x/2000 + sin(x) = 0.
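The roots of this equation can be counted numerically by locating sign changes of the derivative f′(x) = x/2000 + sin(x) on a fine grid. The smaller interval [−100, 100] and the grid resolution below are illustrative choices; on that interval the equation has 63 roots (one at the origin plus one near each multiple of π up to ±31π):

```python
import numpy as np

def df(x):
    """Derivative of the one-dimensional Griewank function f(x) = 1 + x**2/4000 - cos(x)."""
    return x / 2000.0 + np.sin(x)

# Count critical points in [-100, 100] via sign changes of f' on a fine grid.
xs = np.linspace(-100.0, 100.0, 200001)   # step 0.001, fine enough to isolate each root
s = np.sign(df(xs))
crossings = np.sum(s[:-1] * s[1:] < 0)    # strict sign changes between adjacent grid points
exact_zeros = np.sum(s == 0)              # grid points that land exactly on a root
n_critical = int(crossings + exact_zeros)  # 63 on this interval
```

Note that because |sin(x)| ≤ 1, all roots of the equation satisfy |x| ≤ 2000, so widening the grid beyond that range adds no further critical points.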
The multimodal structure of the Griewank function presents a challenge for many deterministic optimization algorithms, which may become trapped in one of the numerous local minima. This is particularly problematic for gradient-based methods. The Griewank function is commonly used to benchmark global optimization algorithms, such as genetic algorithms or particle swarm optimization. In addition to the original version, there are several variants of the Griewank function specifically designed to test algorithms in high-dimensional optimization scenarios[2].
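The trapping behavior described above can be demonstrated with a plain gradient descent on the two-dimensional Griewank function: from a start near the origin it reaches the global minimum, while from a start near the neighboring local minimum around (π, π√2) it gets stuck there. The step size, iteration count, and starting points are illustrative choices:

```python
import math

def griewank2(x1, x2):
    """Two-dimensional Griewank function."""
    return 1.0 + (x1**2 + x2**2) / 4000.0 - math.cos(x1) * math.cos(x2 / math.sqrt(2.0))

def grad(x1, x2):
    """Analytic gradient of the two-dimensional Griewank function."""
    g1 = x1 / 2000.0 + math.sin(x1) * math.cos(x2 / math.sqrt(2.0))
    g2 = x2 / 2000.0 + math.cos(x1) * math.sin(x2 / math.sqrt(2.0)) / math.sqrt(2.0)
    return g1, g2

def descend(x1, x2, lr=0.5, steps=5000):
    """Fixed-step gradient descent from the given starting point."""
    for _ in range(steps):
        g1, g2 = grad(x1, x2)
        x1, x2 = x1 - lr * g1, x2 - lr * g2
    return x1, x2

# Start close to the origin: converges to the global minimum, f = 0.
a = descend(0.5, 0.5)
# Start near the local minimum around (pi, pi*sqrt(2)): gets trapped at f > 0.
b = descend(3.1, 4.4)
```

The trapped run settles at a point close to (π, π√2) with a small but strictly positive function value, because the weak quadratic term cannot pull the iterate over the surrounding ridge of the cosine product.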
Non-smooth variant of the Griewank function
A non-smooth version of the Griewank function has been developed[3] to emulate the characteristics of objective functions frequently encountered in optimization problems from machine learning (ML). These functions often exhibit piecewise smooth or non-smooth behavior due to the presence of regularization terms, activation functions, or constraints in learning models.
A non-smooth variant of the Griewank function is the following:
By incorporating non-smooth elements, such as absolute values in the cosine and sine terms, this function mimics the irregularities present in many ML loss landscapes. It offers a robust benchmark for evaluating optimization algorithms, especially those designed to handle the challenges of non-convex, non-smooth, or high-dimensional problems, including sub-gradient, hybrid, and evolutionary methods.
The function's resemblance to practical ML objective functions makes it particularly valuable for testing the robustness and efficiency of algorithms in tasks such as hyperparameter tuning, neural network training, and constrained optimization.
References
- ^ Griewank, A. O. (1981). "Generalized Descent for Global Optimization". Journal of Optimization Theory and Applications. 34: 11–39.
- ^ Locatelli, M. (2003). "A Note on the Griewank Test Function". Journal of Global Optimization. 25: 169–174.
- ^ Bosse, Torsten F.; Bücker, H. Martin (2024-10-29). "A piecewise smooth version of the Griewank function". Optimization Methods and Software: 1–11. doi:10.1080/10556788.2024.2414186. ISSN 1055-6788.