Some Hard Global Optimization Test Problems

Note that this page dates from 1996 and is preserved only for historical reasons. The problems are no longer hard for good current solvers.

Some Results by George Bilchev


two global optimization problems with variable degree of difficulty

f(x) = sum_{k=1}^n (sum_{i=1}^n [i^k+beta][(x_i/i)^k-1])^2

Suitable bounds which can be optionally imposed are
x_i in [-n,n] for i=1,...,n.
The global minimum is at x_i=i with f(x)=0.

Suggested specific cases: (n,beta) = (4,50), (4,0.5), (10,10^9), (10,10^7); the first and third are easy, the other two somewhat harder. Since these are nowadays also easy, increase n or decrease beta (but see below for tiny beta).
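A minimal NumPy sketch of the first problem (the function and variable names are my own choices, not part of the original formulation), checking that the stated minimizer x_i = i gives f(x) = 0:

```python
import numpy as np

def perm(x, beta):
    """f(x) = sum_{k=1}^n (sum_{i=1}^n (i^k + beta) * ((x_i/i)^k - 1))^2."""
    n = len(x)
    i = np.arange(1, n + 1)
    total = 0.0
    for k in range(1, n + 1):
        inner = np.sum((i**k + beta) * ((x / i)**k - 1.0))
        total += inner**2
    return total

# At the global minimizer x_i = i, every inner bracket vanishes exactly.
n, beta = 4, 50.0
x_star = np.arange(1.0, n + 1)
print(perm(x_star, beta))  # 0.0
```

Note that i**k grows quickly; for large n one may want to evaluate in a higher-precision type to avoid overflow in the coefficients.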

f(x) = sum_{k=1}^n (sum_{i=1}^n [i+beta][(x_i)^k-(1/i)^k])^2

Suitable bounds which can be optionally imposed are
x_i in [-1,1] for i=1,...,n.
The global minimum is at x_i=1/i with f(x)=0.

Suggested specific cases: (n,beta) = (4,10), (10,100); but since these have become easy, increase n or decrease beta.
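The second problem can be sketched analogously (again, the names are mine); the stated minimizer x_i = 1/i yields an exact zero:

```python
import numpy as np

def perm0(x, beta):
    """f(x) = sum_{k=1}^n (sum_{i=1}^n (i + beta) * (x_i^k - (1/i)^k))^2."""
    n = len(x)
    i = np.arange(1, n + 1)
    total = 0.0
    for k in range(1, n + 1):
        inner = np.sum((i + beta) * (x**k - (1.0 / i)**k))
        total += inner**2
    return total

n, beta = 4, 10.0
x_star = 1.0 / np.arange(1, n + 1)  # global minimizer x_i = 1/i
print(perm0(x_star, beta))  # 0.0
```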

beta is a nonnegative parameter. The smaller beta is, the more difficult both problems become, since the global minimum is difficult to distinguish from local minima near permuted solutions. For beta=0, every permuted solution is a global minimum, too.
For tiny beta, the problem becomes (apparently) simpler for algorithms that try to find only one global minimum point, since one of the many near-global local optima is found instead.

These problems therefore appear useful for testing the ability of a global minimization algorithm to reach the global minimum successfully and to discriminate it from other local minima.

Because of the special permutation structure of the problems, genetic algorithms which can exchange components of different candidates might have a slight advantage.


a test problem where the Hessian at the solution can be made arbitrarily ill-conditioned or even singular.

f(x) = sum_{k=1}^n ([sum_{i=1}^n x_i^k] - b_k)^2

The global minimum has the value 0 if the b_k are chosen as b_k = sum_{i=1}^n z_i^k for a target solution z.

No other solutions exist. (Proof: Use the theory of symmetric functions to rewrite the problem as one of finding the zeros of a polynomial with coefficients determined by b.)
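A minimal sketch of this construction (the function name powersum is my own label): generate b from a target z, then verify that f(z) = 0.

```python
import numpy as np

def powersum(x, b):
    """f(x) = sum_{k=1}^n ((sum_{i=1}^n x_i^k) - b_k)^2."""
    total = 0.0
    for k, bk in enumerate(b, start=1):
        total += (np.sum(x**k) - bk)**2
    return total

# Build b from a target solution z, so that f(z) = 0 by construction.
z = 1.0 / np.arange(1, 6)                   # e.g. z_i = 1/i with n = 5
b = [np.sum(z**k) for k in range(1, 6)]
print(powersum(z, b))  # 0.0
```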

Of interest is how the work scales for

POWERSUM1(n): b from z_i = 1/i, bounds x_i in [0,2],
POWERSUM2(n): b from z_i = i-1, bounds x_i in [-n,n],

when n = 5, 10, 20, 30, 40, 50, 100.

Suitable bound constraints for the general problems are x_i in [u,v] (i=1,...,n), where u=min(z_i), v=max(z_i).

All permutations of a solution are solutions, too. In branch and bound codes, one may add the constraint x_1 <= x_2 <= ... <= x_n to avoid the exponential increase of work due to permuted solutions.

The problem becomes harder when a solution with repeated components x_i exists, since then the rank of the Hessian at the solution decreases, and the objective function becomes extremely flat there in some directions. Thus the problem can assess the ability of algorithms to find highly accurate solutions near singular minimal points.

A simple instance is the singular problem POWERSUM(8,18,44,114) with bounds [0,4]^4 and solution (1,2,2,3). The goal here is to find the solution with high accuracy, not just the optimal function value (which is much easier to find to high absolute accuracy).
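This flatness is easy to observe numerically. The sketch below (names are my own) evaluates the singular instance at the known solution and at a nearby perturbed point; perturbing the repeated coordinates by 1e-3 changes f only at the level of about 1e-9, which illustrates why high solution accuracy is hard here:

```python
import numpy as np

# Singular instance POWERSUM(8,18,44,114); known solution (1,2,2,3).
b = np.array([8.0, 18.0, 44.0, 114.0])

def f(x):
    return sum((np.sum(x**k) - b[k - 1])**2 for k in range(1, 5))

x_star = np.array([1.0, 2.0, 2.0, 3.0])
print(f(x_star))  # 0.0

# Perturb the repeated components symmetrically: the k=1 power sum is
# unchanged, and the others change only at order eps^2, so f = O(eps^4).
eps = 1e-3
x_pert = np.array([1.0, 2.0 + eps, 2.0 - eps, 3.0])
print(f(x_pert))   # tiny (around 2.5e-9), though x_pert is 1e-3 away
```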


a very simple large-scale problem

f(x)=[sum_{i=1}^n (x_i-1)^2] - [sum_{i=2}^n x_ix_{i-1}]

Suitable bounds for a bound constrained version would be [-n^2,n^2] for each component.
The solution is x_i=i(n+1-i) with f(x)=-n(n+4)(n-1)/6.
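A minimal sketch (the name trid is my own) that evaluates this function and checks the stated solution and optimal value for n = 10:

```python
import numpy as np

def trid(x):
    """f(x) = sum_{i=1}^n (x_i - 1)^2 - sum_{i=2}^n x_i * x_{i-1}."""
    return np.sum((x - 1.0)**2) - np.sum(x[1:] * x[:-1])

n = 10
i = np.arange(1, n + 1)
x_star = i * (n + 1 - i)                 # claimed minimizer x_i = i(n+1-i)
f_star = -n * (n + 4) * (n - 1) / 6      # claimed optimal value
print(trid(x_star), f_star)  # -210.0 -210.0
```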

This is a simple discretized variational problem: convex, quadratic, with a unique local minimizer and a tridiagonal Hessian. The scaling behaviour for increasing n gives an idea of the efficiency of locating minima once the region of attraction (which here is everything) is found; most local methods need only O(n^2) function evaluations, or only O(n) (function+gradient) evaluations. A global optimization code that has difficulties with solving this problem for, say, n=100 is of limited worth.

The strong coupling between the variables causes difficulties for genetic algorithms.

The problem is typical for many problems from control theory, though the latter are usually nonquadratic and often nonconvex.

A toy model for protein folding

Equations (2.2) - (2.6) in F.H. Stillinger, Phys. Rev. E 48 (1993), 1469-1477, represent perhaps the simplest nontrivial simplified version of the protein folding problem. Each of these problems has exponentially many local optima.

Results of fairly exhaustive multiple random start searches for these problems are available from F.H. Stillinger and from G. Bilchev. See also Studies of an off-lattice model for protein folding by A. Irbäck and F. Potthast.


Arnold Neumaier