Stopping Rules for Global Optimization

One of the major problems in heuristics for global optimization is the choice of an adequate stopping rule. In the presentation of test results, methods are usually compared on the basis of their performance on problems with known solutions. However, in practical problems, one does not know the solution in advance, and needs a criterion that tells the program when to stop searching for a better local minimizer.

The criterion should be stringent enough that not too many function evaluations are wasted after the global minimum has been found, but loose enough to ensure that, in typical cases, the algorithm does not terminate before the global minimizer has been found. The fact that these two goals conflict is what makes the problem so difficult.

Stochastic approaches to the design of suitable stopping criteria are surveyed in Section 6 of the article.

A simple scheme that recurs in this discussion is to stop an algorithm that employs multiple local searches once the number n of local minimizations done and the number w of different local minima obtained satisfy n > N(w), where N(w) is a function depending on the assumptions made. (Several specific implicit definitions of N(w) are given in the above reference.)
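As an illustration only (not a prescription from the reference), the following minimal Python sketch shows a multiple random start loop that stops once n > N(w). The local searches use scipy.optimize.minimize; the function N, the tolerance tol, and the distance test used to decide whether a minimizer counts as new are all assumptions of the sketch.

    import numpy as np
    from scipy.optimize import minimize

    def multistart(f, bounds, N, tol=1e-6, seed=None):
        """Multiple random start with the stopping rule n > N(w).

        f      : objective function of a numpy vector
        bounds : list of (lower, upper) pairs (box constraints)
        N      : callable returning the threshold N(w); its choice is
                 left open here, as in the text above
        """
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        minima = []                  # distinct local minimizers found so far
        best = None
        n = 0                        # number of local minimizations done
        while True:
            x0 = rng.uniform(lo, hi) # random starting point in the box
            res = minimize(f, x0, bounds=bounds)
            n += 1
            # treat the result as a new local minimum if it is not close
            # to any minimizer seen before (a crude clustering test)
            if all(np.linalg.norm(res.x - m) > tol for m in minima):
                minima.append(res.x)
            if best is None or res.fun < best.fun:
                best = res
            w = len(minima)          # number of different local minima obtained
            if n > N(w):             # the stopping rule n > N(w)
                return best, n, w

For instance, calling multistart with N = lambda w: 2*w + 10 keeps searching until the number of local searches clearly exceeds the number of distinct minima found so far; this particular choice is arbitrary and merely illustrates the interface.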

This result is theoretically justified only for the multiple random start method, but it may serve as a guideline for other methods that use multiple local searches as well. A suitable function N(w) should probably be determined empirically for each algorithm by testing it on a large number of cases obtained by randomly varying algorithmic choices or test problem characteristics (such as bound constraints).
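One possible way to carry out such a calibration is sketched below, under the assumptions that the test cases have known global minimum values fglob and that the algorithm is the multiple random start loop shown above; the names calibrate_N and make_problem are hypothetical. For each test run, the pair (w, n) at which the global minimum is first reached is recorded, and N(w) is taken as a high quantile of the observed n.

    import numpy as np
    from scipy.optimize import minimize
    from collections import defaultdict

    def calibrate_N(make_problem, trials=100, tol=1e-6, max_starts=200, seed=None):
        """Empirical calibration of N(w) on problems with known solutions.

        make_problem(rng) -> (f, bounds, fglob): a randomly varied test
        problem together with its known global minimum value fglob.
        """
        rng = np.random.default_rng(seed)
        samples = defaultdict(list)              # w -> list of observed n
        for _ in range(trials):
            f, bounds, fglob = make_problem(rng)
            lo, hi = np.array(bounds, dtype=float).T
            minima = []
            for n in range(1, max_starts + 1):
                res = minimize(f, rng.uniform(lo, hi), bounds=bounds)
                if all(np.linalg.norm(res.x - m) > tol for m in minima):
                    minima.append(res.x)
                if res.fun <= fglob + tol:       # global minimum reached
                    samples[len(minima)].append(n)
                    break
        # a high quantile, so that stopping at n > N(w) rarely comes too early
        return {w: int(np.quantile(ns, 0.95)) for w, ns in samples.items()}

In practice one would fit a smooth increasing function to the resulting (w, N(w)) pairs rather than use the raw table, and repeat the calibration whenever the algorithmic choices change.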



Arnold Neumaier (Arnold.Neumaier@univie.ac.at)