GICOLAG

Global Optimization - Integrating Convexity, Optimization, Logic Programming, and Computational Algebraic Geometry

October 1 - December 31, 2006

A visiting and research program of the

Erwin Schrödinger International Institute for Mathematical Physics (ESI)

Vienna, Austria,

providing a meeting place for leading experts in the international
scientific community and nurturing the development and exchange of ideas.

There was an international workshop on the topic of the program from December 4-8, 2006.

Workshop program

(with slides of most talks)

Other talks given during the program

(with slides of most talks)

Final report on the GICOLAG program

Global Optimization is the task of finding the absolutely best set of admissible conditions to achieve an objective under given constraints, assuming that both are formulated in mathematical terms. A special case is the constraint satisfaction problem, where one just wants to find one or all solutions of a given set of constraints.
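As a small illustration of the task (all names here are illustrative, not tied to any particular solver), the following sketch minimizes a multimodal one-dimensional objective over a box by dense sampling; real global solvers replace this exhaustive search with branch-and-bound, relaxations, and constraint propagation:

```python
def f(x):
    # Two local minima near x = ±sqrt(2); the tilt 0.5*x makes the
    # left one the global minimizer.
    return (x * x - 2.0) ** 2 + 0.5 * x

# Dense sampling of the box [-2, 2]: a brute-force stand-in for a
# genuine global optimization algorithm.
lo, hi, n = -2.0, 2.0, 4001
best_x = min((lo + i * (hi - lo) / (n - 1) for i in range(n)), key=f)
print(best_x, f(best_x))  # global minimizer near x ≈ -1.44
```

A local method started at x = +1 would stop at the inferior right-hand minimum; distinguishing the best local optimum from all others is exactly what makes the global problem hard.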

Meeting the challenge of real applications often involves creating and using new theoretical tools that exploit the special features of the application. As in many areas with difficult computational problems, success is more often due to theoretical advances than to increases in raw computer power or cleverer use of existing software packages.

Progress in these areas, and in global optimization in general, may also have a significant impact on the design and checkability of computer-assisted proofs. For example, the proof of the Kepler conjecture is still unchecked, being of significantly larger complexity than earlier computer-assisted proofs such as that of the stability of relativistic matter. Better theory and algorithms would very likely compress the computer-assisted proof to a size at which automatic proof-checking with human-readable checking protocols becomes tractable.

There are a number of different approaches to global optimization and related problems. As documented in a recent survey, the traditional approaches to these tasks are slowly merging into a unified whole, making the solution of difficult global optimization problems tractable in dimensions relevant to real-life applications.

However, this process is hampered by the fact that the approaches are substantially different and draw their strengths from mathematical traditions -- numerical analysis, convex analysis, optimization theory, combinatorics, logic, algebraic geometry -- which have little theoretical background in common.

The present program had the goal of enabling extensive discussions between experts in the respective theories, investigating points of contact and synergies, and fostering collaboration on combination and solution strategies that utilize all approaches in an integrated way.

Constraint programming is a logic-based approach to solving constraint satisfaction problems, i.e., finding all or some points satisfying a given set of constraints. Apart from forming an important class of global problems in their own right, such problems also arise naturally as subproblems during the solution of a global optimization problem.
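The core of such logic-based search can be sketched in a few lines (a toy backtracking solver, not a real constraint programming system): variables are assigned one at a time, and each partial assignment is checked against the constraints before the search proceeds. The classic n-queens problem serves as the example here.

```python
# A minimal backtracking constraint solver (illustrative only): place
# n non-attacking queens, one per row, checking the column and
# diagonal constraints as each variable is assigned.

def solve_queens(n):
    solutions = []

    def consistent(cols, col):
        # Row len(cols) gets column `col`; check against earlier rows.
        r = len(cols)
        return all(c != col and abs(c - col) != r - rr
                   for rr, c in enumerate(cols))

    def extend(cols):
        if len(cols) == n:
            solutions.append(tuple(cols))
            return
        for col in range(n):
            if consistent(cols, col):
                extend(cols + [col])

    extend([])
    return solutions

print(len(solve_queens(4)))  # 4-queens has exactly 2 solutions
```

Real constraint programming systems add much stronger propagation, pruning whole subtrees before any assignment is tried; the same search skeleton underlies both.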

Interval analysis is a powerful tool for automatically computing estimates of function values and derivatives, bounds on nonlinearities, and rigorous numerical solutions of many standard problems. As such, it is indispensable for non-combinatorial computer-assisted proofs. Using interval analysis, the constraint logic approach for solving discrete constraint satisfaction problems can be adapted to the continuous case. In return, the constraint propagation methods developed for constraint satisfaction can help to sharpen the estimates computed by interval analysis.

The well-developed theory of local optimization plays a central role in global optimization: finding a local optimizer is an important step in solving the global problem, and it succeeds if the starting point is close enough to the global minimizer. On the other hand, local optimization software requires knowledge of a good starting point for the search. For many problems, finding such a starting point is difficult, and the global search routines developed for global optimization can help there.
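A minimal multistart sketch makes the dependence on the starting point concrete (illustrative only; plain gradient descent stands in for serious local optimization software): the local method is run from several starting points and the best local minimizer found is kept.

```python
def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x   # minima near x = ±1

def df(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def local_descent(x, lr=0.01, steps=2000):
    # Plain gradient descent: converges to whichever local minimum
    # the starting point x lies in the basin of.
    for _ in range(steps):
        x -= lr * df(x)
    return x

starts = [-2.0, -0.5, 0.5, 2.0]
best = min((local_descent(x0) for x0 in starts), key=f)
print(best)  # the global minimizer, near x ≈ -1.03
```

A single start at x = 0.5 or x = 2 would end at the inferior local minimum near x = +0.96; the role of a global search routine is to supply starting points (or boxes) that cannot miss the global basin.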

Mixed integer linear programming has long been the only way to solve larger global optimization problems, at least approximately, by replacing nonlinearities with piecewise linear substitutes. The highly developed theory and algorithms in this area are ripe for combination with interval techniques that can directly exploit the original nonlinearities. Integration here may be the key to the solvability of many large-scale problems.
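The piecewise linear substitution itself is easy to sketch (an illustration, not an MILP model): approximate the nonlinearity x**2 on [0, 2] by chords between equally spaced breakpoints. Since the second derivative of x**2 is constant, the worst-case chord error on a segment of width h is h**2 / 4, so doubling the number of breakpoints quarters the error.

```python
def pwl_error(n_segments, lo=0.0, hi=2.0, samples=2000):
    """Worst sampled error of the chord approximation of x**2."""
    h = (hi - lo) / n_segments
    worst = 0.0
    for i in range(samples + 1):
        x = lo + i * (hi - lo) / samples
        k = min(int((x - lo) / h), n_segments - 1)   # segment index
        x0, x1 = lo + k * h, lo + (k + 1) * h
        # Linear interpolation of x**2 between the breakpoints.
        chord = x0 * x0 + (x1 * x1 - x0 * x0) * (x - x0) / h
        worst = max(worst, abs(chord - x * x))
    return worst

print(pwl_error(4), pwl_error(8))  # ~0.0625 and ~0.0156
```

In an MILP model, binary variables select the active segment; interval techniques can instead bound the original nonlinearity directly, which is what makes the combination of the two approaches attractive.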

One of the most successful techniques of the past is the recursive solution of suitable relaxations of global optimization problems; until recently, mostly linear relaxations were used. The dramatic progress in semidefinite programming, combined with powerful theorems from real algebraic geometry that prove the approximability of sets of polynomial constraints by convex constraints in a higher-dimensional representation, leads to semidefinite programming relaxations that promise to have a large impact on the solution of hard global optimization problems. For example, this line of work produced the Matlab package GloptiPoly for solving polynomial global optimization problems, and computationally attractive necessary and sufficient conditions for polynomial global optimization.
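The mechanism behind these relaxations can be sketched in the unconstrained case (a standard sum-of-squares formulation, given here as an illustration):

```latex
\[
  p^{*} \;=\; \min_{x\in\mathbb{R}^n} p(x)
  \;\ge\; \sup\,\{\lambda \;:\; p - \lambda \in \Sigma\},
\]
where $\Sigma$ is the cone of sums of squares of polynomials.
Membership in $\Sigma$ is a semidefinite constraint: with $z(x)$ the
vector of monomials up to half the degree of $p$,
\[
  p(x) - \lambda \;=\; z(x)^{\mathsf T} Q\, z(x), \qquad Q \succeq 0,
\]
and matching coefficients on both sides turns the computation of the
bound into a semidefinite program in $(\lambda, Q)$.
```

Constrained problems are handled by a hierarchy of such relaxations of growing degree, whose optimal values converge to the global optimum under mild conditions.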

Algebraic geometry also contributes in a second way, by providing systematic methods for the automatic symbolic transformation of constraints into a computationally more useful form, which is critical in some applications.
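A hand-worked example shows the kind of transformation that Gröbner-basis methods automate (the system and the elimination are illustrative): from the constraints x - y = 0 and x**2 + y**2 - 1 = 0, substituting x = y yields the univariate condition 2*y**2 - 1 = 0, whose roots extend back to solutions of the original system.

```python
import math

# Roots of the eliminated univariate equation 2*y**2 - 1 = 0.
roots_y = [math.sqrt(0.5), -math.sqrt(0.5)]

# Back-substitute: the first constraint forces x = y.
solutions = [(y, y) for y in roots_y]

# Verify that each recovered point satisfies the original system.
for x, y in solutions:
    assert abs(x - y) < 1e-12
    assert abs(x * x + y * y - 1.0) < 1e-12
print(solutions)
```

Turning a coupled polynomial system into a triangular one in this way is exactly the "computationally more useful form" that symbolic preprocessing aims for.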

Apart from these more theoretical issues, other matters need attention for the successful integration of the various techniques. In general, it is very hard to predict which strategies and algorithms will prove most reliable and successful, which shows the need for a good testing platform. The broad availability and use of modeling languages makes it easy to experiment with various solvers and strategies to find the one best suited to an application area or problem class. It also facilitates the benchmarking of competing implementations, providing a further incentive for competition and resulting progress. Benchmarking results could in turn feed a data-mining and analysis process that provides automatic strategy selection. Moreover, benchmarking optimization algorithms is itself a nontrivial problem, and careful benchmarking on a well-chosen problem collection would give a significant boost to the practice of global optimization by providing a reliable quality assessment.
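A benchmarking harness can be sketched in a few lines (all names and test problems here are illustrative): run each solver on each problem of a small collection under the same evaluation budget, record the best objective value found, and tabulate the results so strategies can be compared on equal terms.

```python
import random

def grid_solver(f, lo, hi, budget=200):
    # Deterministic equispaced sampling within the budget.
    return min(f(lo + i * (hi - lo) / (budget - 1)) for i in range(budget))

def random_solver(f, lo, hi, budget=200):
    rng = random.Random(0)   # fixed seed: reproducible benchmark runs
    return min(f(rng.uniform(lo, hi)) for _ in range(budget))

problems = {
    "quadratic": (lambda x: (x - 0.3) ** 2, -1.0, 1.0),
    "two_wells": (lambda x: (x * x - 1.0) ** 2 + 0.3 * x, -2.0, 2.0),
}
solvers = {"grid": grid_solver, "random": random_solver}

results = {(s, p): solvers[s](*problems[p])
           for s in solvers for p in problems}
for (s, p), v in sorted(results.items()):
    print(f"{s:6s} {p:10s} best f = {v:.4f}")
```

A real testing platform would add time limits, solver failure handling, and a curated problem collection; the table of (solver, problem, result) records is the raw material for the data-mining and strategy-selection step described above.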