Optimization addresses the problem of finding the best possible choice with respect to a given target while not violating a number of restrictions.

In mathematical terms, optimization is the problem of minimizing (or maximizing) a prescribed function, the objective function, while obeying a number of equality and inequality constraints. Depending on the domains of definition of these functions, one can distinguish various classes of optimization problems: continuous problems, discrete problems, and mixed-integer problems.
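In symbols, such a problem is commonly written in a form like the following (the names $f$, $g_i$, $h_j$, and $D$ are generic placeholders, not notation fixed elsewhere in this book):
\[
  \min_{x \in D}\; f(x)
  \qquad\text{s.t.}\qquad
  g_i(x) \le 0 \;\;(i = 1,\dots,m),
  \quad
  h_j(x) = 0 \;\;(j = 1,\dots,p).
\]
Here $D \subseteq \mathbb{R}^n$ gives a continuous problem, $D \subseteq \mathbb{Z}^n$ a discrete one, and a product of real and integer ranges a mixed-integer problem.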

Since DANTZIG [dantzig] invented the simplex algorithm in 1947, optimization problems have been solved in many contexts. To this day, a great number of the optimization problems used in industrial applications are linear.

Linear programming (solving linear problems) is special in many ways. One important property is that there is only one notion of optimality: every point which is locally optimal (i.e., a point that satisfies all constraints and has the best objective function value among all such points in a neighborhood) is automatically globally optimal (i.e., has the best achievable objective function value within the given constraints). For non-linear problems this is in general far from true. Not all relations encountered in real life are linear, however, and non-linear functions are needed in the objective and/or the constraints to make the models fit reality better.
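In formulas (with $F$ denoting the feasible set and $f$ the objective function, generic symbols used only for this illustration), a feasible point $\hat x$ is
\begin{align*}
  \text{a local optimizer}  &\iff \exists\,\varepsilon > 0:\; f(\hat x) \le f(x)
      \text{ for all } x \in F \text{ with } \|x - \hat x\| \le \varepsilon,\\
  \text{a global optimizer} &\iff f(\hat x) \le f(x) \text{ for all } x \in F.
\end{align*}
For linear problems the two notions coincide; for non-linear problems a local optimizer may be arbitrarily far, in objective value, from a global one.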

Solving nonlinear optimization problems traditionally means searching for a local optimum. Many applications nowadays are driven by economic interest (e.g., the traveling salesman problem, packing problems, or cutting stock problems), and for earning money it is usually sufficient to gain an improvement of, say, 10% over the solution currently in use, or over that of the competitors. Local optimization problems can usually be solved with moderate effort, even for industrial-size problems.

For a number of applications, however, knowing such a ``good'' feasible point is not enough, and for some other problems even finding a feasible point (i.e., a point satisfying all constraints) is exorbitantly difficult. There are problems in robotics (see LEE ET AL. [LeeM,LeeMM] and NEUMAIER [Neurob]), in chemical engineering (cf. FLOUDAS [FloMI,Floch] and FLOUDAS ET AL. [FloPA]), in protein folding (see KOSTROWICKI & SCHERAGA [KosS] and NEUMAIER [Neuprot]), in safety verification and worst-case analysis, and in mathematics (e.g., Kepler's conjecture, see HALES [Hal]), which can only be solved by finding the global optimum of the problem.

Depending on the properties of the objective function and of the constraints, global optimization can be a highly non-trivial problem; in general, it is NP-hard. Therefore, for every generic algorithm there will very likely be problems for which finding the solution requires effort exponential in the size of the problem. This is a consequence of the famous conjecture $P \neq NP$; see SIPSER [Sips] or BAKER ET AL. [BakGS] for the definition of the conjecture, and PAPADIMITRIOU [Papadi] and PARDALOS & SCHNITGER [ParS] for the connection to optimization.

During the last decade, interest in global optimization has been ever increasing, partly because there are more and more applications, and partly because the development of algorithms has already moved the field to the point where many small but already industrial-size applications can be solved reliably (see SHCHERBINA ET AL. [ShN]).

For a large number of applications, even finding feasible points is difficult; in particular, local optimization programs fail. Sometimes evaluating the constraints or the objective function is expensive. For these situations a number of heuristic algorithms have been developed (see JONES ET AL. [JonSW], HUYER & NEUMAIER [HuyN,HuyN2], and, for a comparison of different heuristic methods, JANKA [Janka]). They perform an incomplete search guided by sophisticated concepts and usually end up at or near very good local optima, often at the global one. However, they can never be certain that they have actually found the global optimizer.
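By way of illustration, the toy multistart search below (a deliberately crude stand-in for the sophisticated methods cited above, applied to the hypothetical objective $f(x) = x^4 - 4x^2 + x$) shows both the strength and the fundamental limitation of incomplete search:
\begin{verbatim}
import random

# Multistart local search on f(x) = x^4 - 4x^2 + x, which has two
# local minima; a crude stand-in for real heuristic global solvers.

def f(x):
    return x**4 - 4*x**2 + x

def local_descent(x, step=0.1):
    """Pattern search with shrinking steps; converges to a local optimum."""
    while step > 1e-9:
        if   f(x + step) < f(x): x += step
        elif f(x - step) < f(x): x -= step
        else: step *= 0.5
    return x

random.seed(1)
starts = [random.uniform(-3, 3) for _ in range(20)]
best = min((local_descent(s) for s in starts), key=f)
print(f"best local optimum found: f({best:.4f}) = {f(best):.4f}")
# Often this is the global optimum -- but nothing certifies that it is.
\end{verbatim}
The best point returned is very often the global optimizer, but the method produces no certificate of that fact; only a complete search can.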

For safety applications, chemical phase and equilibrium problems, and mathematical applications, incomplete methods are not applicable, because there the global optimum must be found with certainty. Performing a complete search, covering the whole feasible set, needs an effort exponential in the problem dimension if implemented naively. To overcome that difficulty, research groups have started to attack the field from various directions.

The basic principle, the branch-and-bound method, is common to most of these approaches. The search region is recursively split into smaller pieces. For each of these pieces the algorithm tries to prove that it does not contain the global optimum (bound step); if it succeeds, the piece is discarded, and if not, the piece is broken into even smaller pieces (branch step). Once all pieces have either been discarded or have reached a very small size, the algorithm finishes.
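A minimal sketch of this scheme in Python, on the same toy polynomial as above, might look as follows; the hand-built lower bound is rigorous but crude, whereas a production solver would obtain bounds by the techniques discussed in the rest of this chapter.
\begin{verbatim}
# Minimal branch-and-bound: minimize f(x) = x^4 - 4x^2 + x on [-3, 3].

def f(x):
    return x**4 - 4*x**2 + x

def lower_bound(a, b):
    """A valid lower bound of f on [a, b] via termwise interval bounds."""
    lo2 = 0.0 if a <= 0.0 <= b else min(a*a, b*b)   # range of x^2
    hi2 = max(a*a, b*b)
    return lo2*lo2 - 4*hi2 + a                      # x^4 - 4x^2 + x, termwise

best_x, best_f = 0.0, f(0.0)          # incumbent: best feasible point so far
boxes = [(-3.0, 3.0)]
while boxes:
    a, b = boxes.pop()
    if lower_bound(a, b) > best_f:    # bound step: box cannot hold the optimum
        continue                      # ... so discard it
    m = 0.5*(a + b)
    if f(m) < best_f:                 # midpoint may improve the incumbent
        best_x, best_f = m, f(m)
    if b - a > 1e-6:                  # branch step: split the box
        boxes += [(a, m), (m, b)]

print(f"global minimum ~ {best_f:.6f} at x ~ {best_x:.6f}")
\end{verbatim}
Because the lower bound is valid, every discarded box provably contains no better point than the incumbent, which is what makes the search complete rather than heuristic.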

The main difference between the approaches to global optimization taken by the various research teams lies in the bounding step. Depending on their tradition, they use different methods for analyzing subproblems during the branch-and-bound scheme.

Methods based on interval analysis compute bounds on the objective function and the constraints using interval arithmetic and mean value forms with interval derivatives or slopes. Areas around local optima are handled by interval Newton methods such as the Krawczyk iteration.
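As a small illustration, a naive interval type (ignoring the directed rounding that any rigorous implementation must use) already exhibits the key enclosure property:
\begin{verbatim}
# Toy interval arithmetic; a rigorous library would additionally round
# lower bounds down and upper bounds up in every operation.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = [self.lo*other.lo, self.lo*other.hi,
             self.hi*other.lo, self.hi*other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# Natural interval extension of f(x) = x*(1 - x) for x in [0, 1]:
x = Interval(0.0, 1.0)
one = Interval(1.0, 1.0)
print(x * (one - x))   # [0.0, 1.0]
\end{verbatim}
The computed enclosure $[0,1]$ is guaranteed to contain the true range $[0,\tfrac14]$; the overestimation stems from the two occurrences of $x$, and it is precisely this dependency problem that mean value forms and slopes mitigate.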

If the objective function is constant (or, equivalently, missing), the global optimization problem becomes a constraint satisfaction problem. For those problems there exists a tradition that comes from constraint logic programming. In the logic programming languages Prolog V and ECLiPSe, discrete constraint satisfaction problems are formulated; they are considered solved as soon as one feasible point has been found. The solution algorithms use special combinations of backtracking and constraint propagation for tracking down the solution. Combining these techniques with intervals leads to the generalized form of constraint propagation used for attacking continuous constraint satisfaction problems. Every global optimization problem can be reduced to a sequence of constraint satisfaction problems, so constraint propagation is a relevant technique for solving those problems, too.
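The toy example below (a single hypothetical constraint $x + y = z$ with hand-coded narrowing rules) shows the basic mechanism of interval constraint propagation:
\begin{verbatim}
# Interval constraint propagation on the single constraint x + y = z.
# Each rule intersects one variable's domain with what the other two
# domains allow; iterating to a fixpoint narrows all three domains.

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty domain: constraint infeasible")
    return (lo, hi)

def add(a, b):  return (a[0] + b[0], a[1] + b[1])
def sub(a, b):  return (a[0] - b[1], a[1] - b[0])

x, y, z = (0.0, 10.0), (0.0, 10.0), (12.0, 14.0)
while True:
    old = (x, y, z)
    z = intersect(z, add(x, y))   # z must lie in x + y
    x = intersect(x, sub(z, y))   # x must lie in z - y
    y = intersect(y, sub(z, x))   # y must lie in z - x
    if (x, y, z) == old:          # fixpoint reached
        break

print(x, y, z)   # x and y narrow from [0, 10] to [2, 10]
\end{verbatim}
In a real solver many constraints share variables, so narrowing one domain can trigger further narrowing elsewhere, and the rules are applied until no domain shrinks any more.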

Relaxation techniques are very successful for solving global optimization problems. In this approach the complicated nonlinear problems are replaced by linear or convex relaxations, which can easily be solved by standard software. Since they are relaxations, their solutions provide lower bounds on the possible values of the objective function for a given subproblem. If this lower bound is higher than the function value of the best point found so far, the subproblem can be discarded.
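A classical building block for such linear relaxations (given here only as an illustration of the general idea) is the McCormick envelope of a bilinear term $w = xy$ with $x \in [\underline x, \overline x]$ and $y \in [\underline y, \overline y]$: the nonlinear equation is replaced by four linear inequalities that every feasible point satisfies,
\begin{align*}
  w &\ge \underline x\, y + \underline y\, x - \underline x\, \underline y, &
  w &\ge \overline x\, y + \overline y\, x - \overline x\, \overline y,\\
  w &\le \underline x\, y + \overline y\, x - \underline x\, \overline y, &
  w &\le \overline x\, y + \underline y\, x - \overline x\, \underline y.
\end{align*}
Minimizing over the resulting polyhedron instead of the original feasible set can only decrease the optimal value, which is exactly what makes the relaxation yield a valid lower bound.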

There are even more ideas, like symbolic transformations, semidefinite relaxations, and methods from algebraic geometry such as Groebner bases, which have been successfully applied to solving global optimization problems.

In the past, the various research teams pursued their techniques with only limited interaction. The most interesting fact about all these approaches is that they complement each other. It is not like in local optimization, where the different classes of algorithms compete. In global optimization the various ideas can be combined, and there are synergetic effects; e.g., the combination of constraint propagation, interval analysis, and linear underestimation produces considerably stronger linear relaxations than any single method could achieve.

This book is intended as a short introduction to modeling and solving global optimization problems, keeping the integration of techniques from various fields in mind. The main focus of the book is on complete methods, which can be used to yield guaranteed results. The techniques presented in this book require that the objective function and the constraints be given by mathematical formulas; black-box optimization and surrogate modeling will not be discussed. To make the integration technically feasible, the book also contains an exposition of an open-source solver platform for global optimization, which was developed during the COCONUT project [coconut] funded by the European Community.

The book, and the software platform presented herein, can be seen as an effort to provide the technological basis for the global optimization community to join forces; hopefully, this will lead to the development of global optimization solvers which can solve bigger problems faster and more reliably.