Optimization Visitors Seminar Series - Abstracts

Many physical phenomena in engineering design can be described by computational
models of high physical fidelity or numerical accuracy.
Optimization with high-fidelity simulations gives rise to large-scale
nonlinear programming problems (NLP) due to the large number of state
variables and the associated computational cost of solving (coupled)
differential equations that govern the behavior of the system.
Straightforward use of high-fidelity models, such as the Navier-Stokes
equations or those based on fine computational meshes, in iterative
procedures can be prohibitively expensive. We discuss a first-order,
variable-fidelity model management scheme for solving optimization problems
whose function evaluations involve outputs of simulations. The approach
reduces the cost of optimization by systematically using lower-fidelity
models or surrogates, with occasional recourse to high-fidelity computations
for model re-calibration. Convergence to high-fidelity results is maintained.
We discuss computation with variable-resolution and variable physical
fidelity models, as well as a variety of response surface approximations.
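The re-calibration step can be made concrete with the standard first-order additive correction (a minimal sketch using toy stand-in models `f_hi` and `f_lo` of our own choosing, not the specific scheme of the talk): the corrected surrogate matches the high-fidelity value and slope at the calibration point.

```python
import numpy as np

# Hypothetical model pair: f_hi plays the expensive high-fidelity output,
# f_lo a cheap low-fidelity surrogate of the same quantity.
def f_hi(x):
    return np.sin(x) + 0.1 * x**2

def f_lo(x):
    return x - x**3 / 6.0  # crude Taylor-style surrogate of sin(x)

def d(f, x, h=1e-6):
    """Central finite-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

def corrected_model(x, x0):
    """Additive correction making f_lo agree with f_hi to first order at x0."""
    a = f_hi(x0) - f_lo(x0)        # zeroth-order (value) correction
    b = d(f_hi, x0) - d(f_lo, x0)  # first-order (slope) correction
    return f_lo(x) + a + b * (x - x0)

x0 = 0.8
m = lambda x: corrected_model(x, x0)
# First-order consistency: the model matches value and slope at x0.
print(abs(m(x0) - f_hi(x0)))        # ~0
print(abs(d(m, x0) - d(f_hi, x0)))  # ~0
```

The optimizer then works mostly with the cheap corrected model `m`, touching `f_hi` only when re-calibrating at a new point `x0`.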

We present a new framework for efficiently finding competitive solutions for
the facility layout problem. This framework is based on the combination
of two new mathematical programming models. The first model is a convex
relaxation of the layout problem and is intended to find good starting
points for the iterative algorithm used to solve the second model. The
second model is an exact formulation of the facility layout problem
as a non-convex mathematical program with equilibrium constraints (MPEC).
Aspect ratio constraints, which are frequently used in facility layout
methods to restrict the occurrence of overly long and narrow departments
in the computed layouts, are easily incorporated into this new framework.
We present computational results showing that both models, and hence
the complete framework, can be solved efficiently using widely available
optimization software. This important feature of the new framework implies
that it can be used to find competitive layouts with relatively little
computational effort. This is advantageous for a user who wishes to
consider several competitive layouts rather than simply using the mathematically
optimal layout.
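Aspect ratio restrictions of this kind are commonly written linearly in each department's dimensions; a typical formulation (an illustration, not necessarily the exact constraints of the framework above) for department $i$ with width $w_i$, height $h_i$, required area $a_i$, and maximum allowed aspect ratio $\alpha\ge 1$ is
$$w_i h_i = a_i, \qquad h_i \le \alpha\, w_i, \qquad w_i \le \alpha\, h_i,$$
which enforces $\max(w_i,h_i)/\min(w_i,h_i)\le\alpha$ and thereby rules out overly long and narrow departments.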

Heinz Bauschke, University of Guelph

The method of reflection-projections for convex feasibility problems
with an obtuse cone.

The convex feasibility problem consists of finding a point in the intersection
of closed convex sets. The classical method of cyclic projections approximates
a solution by projecting cyclically onto the constraint sets.

We consider the case when one of the constraints is an obtuse cone,
e.g., the nonnegative orthant or the positive semidefinite matrices.
We obtain a convergence result when the projection onto the cone is
replaced by the reflection. Numerical experiments suggest that this
approach is faster than the method of cyclic projections. We also comment
on the inconsistent case.

Based on joint work with Serge Kruk (Oakland University).
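A minimal sketch of the scheme on a toy instance (a half-space intersected with the nonnegative orthant; the instance and code are ours, not the paper's): the projection onto the cone is replaced by the reflection $R = 2P - I$.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto the half-space {y : a.y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def proj_orthant(x):
    """Project x onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

def reflect_orthant(x):
    """Reflection through the nonnegative orthant: R = 2P - I."""
    return 2 * proj_orthant(x) - x

# Toy feasibility problem: {x : a.x <= b} intersected with the orthant.
a, b = np.array([1.0, 2.0]), 1.0
x = np.array([-3.0, 4.0])
for _ in range(100):
    x = proj_halfspace(x, a, b)  # ordinary projection onto the half-space
    x = reflect_orthant(x)       # reflection replaces projection on the cone

print(x, a @ x <= b + 1e-8, (x >= -1e-8).all())
```

Note that reflecting through the orthant is just the componentwise absolute value, so it is no more expensive than the projection it replaces.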

Richard J. Caron, Dean of Science, Department of Mathematics
and Statistics, University of Windsor

Optimization methods for inequality constrained optimization problems
attempt to incorporate constraint information into the search direction
in a variety of ways. Interior-point methods incorporate constraint
information by including a barrier term or by using the complementarity
equations in the KKT system. In this talk a different approach will
be presented that builds constraint information directly into the search
direction through the use of "nonlinear scaling". The resulting first- and
second-order search directions will be discussed, and preliminary numerical
results will be presented.

Robert M. Freund, MIT Sloan School of Management

Two Topics on the Complexity of Convex Optimization, one Computational
and one Theoretical

We consider the following quite general convex optimization
problem format: $$\begin{array}{lclr} G_d: & {\rm minimize}_x &
c^{T}x
\\ & \mbox{ s.t. } & Ax=b\\ & & x \in P, \\
\end{array}$$
\noindent where $P$ is a closed convex set, not necessarily a cone.
Linear optimization is a special case of this format in the case when
$P$ is polyhedral, and a modern complexity theory for linear optimization
has been developed based on a notion of condition number $C(d)$ for
the data $d=(A,b,c)$ for $G_d$. We develop some computational experience
and test the practical relevance of this condition number theory, as
applied to problem instances that one might encounter in practice, using
the NETLIB suite of LP problem instances. We present empirical evidence
indicating that 42% of the variation in IPM iterations among the
NETLIB suite problem instances is accounted for by $\log C(d)$ of the
problem instances after pre-processing.

Our theoretical work concerns the complexity of convex optimization
using geometry-based measures rather than algebraic or data-based measures
for obtaining complexity bounds. We bound the complexity of computing
an almost-optimal solution of $G_d$ in terms of natural geometry-based
measures of the feasible region and the level-set of almost-optimal
solutions, relative to a given {\em reference point} $x^r$ that might
be close to the feasible region and/or the almost-optimal level set.

Hugo Lara, research visitor at University of Waterloo

Condition and Complexity Measures for Infeasibility Certificates
of Systems of Linear Inequalities and Their Sensitivity Analysis

In this work we introduce a new condition measure $\xi(A)$ for the
linear feasibility problem. This measure is based on the signing
structure of the subspaces generated by the matrix $A$ that defines
the problem. We consider two other condition numbers
$\bar\chi(A)$ and $\sigma(A)$, studied by Vavasis and Ye \ref{} in the
setting of interior point algorithms in linear programming. In
\ref{tuncel} Tun\c cel studied the connection between $\bar\chi(A)$
and $\sigma(A)$ by exploiting their signing structure, and pointed to
the existence of families of condition numbers (such as $\xi(A)$) ``between''
them. We provide a dual characterization of
$\xi(A)$ and use it to study infeasibility detection via a constructive
proof of a Helly-type theorem, in the same way that
Ho and Tun\c cel \ref{ho-tuncel} do using $\bar\chi(A)$. We also provide
characterizations of $\bar\chi(A)$ when different norms are used. By
exploiting the signing structure and its connection with oriented matroids,
we deal with sensitivity analysis of
$\bar\chi_1(A)$, $\sigma(A)$ and $\xi(A)$. We give geometric and algebraic
interpretations of perturbations of the matrix $A$ that
preserve the continuity of such condition numbers. Finally, generalizations
to convex cones and the relationship with Renegar's condition measure are
provided.

We consider the linear program $P_1: \max\{c^Tx : Ax\le b,\ x\ge 0\}$,
as well as its respective ``integral'' and ``counting'' versions
$Q_1: \int_{Ax\le b,\, x\ge 0} e^{c^Tx}\, dx$ and
$Q_2: \sum_{Ax\le b,\ x\ {\rm integer}} e^{c^Tx}$.
There is a well-known duality for $P_1$; perhaps less known are the
corresponding dualities for $Q_1$ and $Q_2$.
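Not the dualities themselves, but the basic link between $Q_1$ and $P_1$ can be checked numerically: by Laplace's principle, scaling the objective by $t$ and taking $\frac{1}{t}\log Q_1$ recovers the LP optimal value as $t\to\infty$ (a toy one-dimensional sketch on an instance of our own choosing):

```python
import numpy as np

# Toy 1-D instance: feasible set {x : 0 <= x <= 2}, objective c = 1.
# P_1 = max c*x = 2.  Q_1(t) = integral of exp(t*c*x) over [0, 2].
c, b = 1.0, 2.0

def log_Q1(t, n=200000):
    """log of a Riemann-sum approximation of the integral, via log-sum-exp."""
    xs = np.linspace(0.0, b, n)
    vals = t * c * xs
    m = vals.max()  # shift for numerical stability
    return m + np.log(np.sum(np.exp(vals - m)) * (b / n))

for t in (1, 10, 100):
    print(t, log_Q1(t) / t)  # approaches P_1 = 2 as t grows
```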

We present a way of exploiting partial separability in particular global
optimization problems. The aim is to reduce significantly the
dimension of the search space. The procedure relies on graph theory
tools such as graph partitioning with node separators. We illustrate
the idea on a distance geometry problem which arises in the interpretation
of nuclear magnetic resonance data and in the determination of protein
structures.
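As a self-contained illustration of the underlying task of recovering coordinates from distance data (classical multidimensional scaling on exact distances; this is not the partial-separability procedure of the talk):

```python
import numpy as np

def classical_mds(D2, k=3):
    """Recover k-dimensional coordinates from a matrix of squared
    Euclidean distances via the centered Gram matrix."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    B = -0.5 * J @ D2 @ J                # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]        # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Toy instance: random 3-D "atom" positions and their exact distances.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 3))
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
Y = classical_mds(D2, k=3)
D2rec = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
print(np.max(np.abs(D2 - D2rec)))  # ~0: all pairwise distances reproduced
```

With noisy or partial distance data, as in NMR applications, this direct factorization no longer suffices, which is what motivates optimization formulations of the problem.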

Several hard combinatorial optimization problems, such as Max-Cut or
Quadratic Assignment, can be approximated nicely by Semidefinite Programs
(SDP). These relaxations can be further improved by including combinatorial
cutting planes, resulting in huge SDPs.

We use the bundle method from nonsmooth optimization to tackle these
SDPs. The idea consists in identifying a basic model, and taking the
Lagrange dual of the remaining constraints.

We discuss some practical issues like finding good primal solutions,
and identifying important cutting planes.

Finally, we present computational results of this approach applied to
the max-cut relaxation given by the basic SDP model intersected with
the metric polytope, and SDP relaxations of the Quadratic Assignment
Problem.
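The combinatorial cutting planes for max-cut are typically the triangle inequalities of the metric polytope; in the $\{-1,1\}$ model a brute-force separation routine is short (a sketch on a toy matrix, not the authors' implementation):

```python
import numpy as np
from itertools import combinations

def violated_triangle_cuts(X, tol=1e-6):
    """List triangle inequalities of the metric polytope (in the {-1,1}
    model) violated by a symmetric matrix X with unit diagonal."""
    cuts = []
    n = X.shape[0]
    for i, j, k in combinations(range(n), 3):
        a, b, c = X[i, j], X[i, k], X[j, k]
        # the four triangle inequalities on each triple
        for s1, s2, s3 in [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]:
            if s1 * a + s2 * b + s3 * c < -1 - tol:
                cuts.append(((i, j, k), (s1, s2, s3)))
    return cuts

# Toy matrix violating X_01 + X_02 + X_12 >= -1:
X = np.eye(3)
X[0, 1] = X[1, 0] = -0.8
X[0, 2] = X[2, 0] = -0.8
X[1, 2] = X[2, 1] = -0.8
print(violated_triangle_cuts(X))  # one violated inequality on triple (0,1,2)
```

Adding all such inequalities at once is what produces the huge SDPs mentioned above, hence the interest in identifying only the important ones.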

Michael Todd, School of Operations Research, Cornell University

Detecting infeasibility in interior-point methods for optimization.

Interior-point methods are both theoretically and practically efficient
algorithms for solving linear programming problems and certain convex
extensions. However, they seem to be less flexible than the well-known
simplex method: in particular, they do not deal as gracefully with infeasible
or unbounded (dual infeasible) problems.
One elegant way to produce either an optimal solution or a certificate
of infeasibility is to embed the original problem in a larger homogeneous
self-dual problem, but this seems less efficient computationally. Most
implementations use so-called infeasible-interior-point methods, which
focus exclusively on attaining feasibility and optimality. However,
such methods are surprisingly successful in detecting infeasibility.
We show that, on infeasible problems, they can be viewed as implicitly
searching for an infeasibility certificate.
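The certificate in question is a Farkas vector: for $\{x : Ax=b,\ x\ge 0\}$, any $y$ with $A^Ty\le 0$ and $b^Ty>0$ proves infeasibility. A minimal checker on a toy instance (our illustration, not from the talk):

```python
import numpy as np

def is_infeasibility_certificate(A, b, y, tol=1e-9):
    """Check the Farkas certificate for infeasibility of {x : Ax = b, x >= 0}:
    y is a certificate iff A^T y <= 0 and b^T y > 0."""
    return bool((A.T @ y <= tol).all() and b @ y > tol)

# Toy infeasible system: x1 + x2 = -1 has no nonnegative solution.
A = np.array([[1.0, 1.0]])
b = np.array([-1.0])
y = np.array([-1.0])  # A^T y = (-1, -1) <= 0 and b^T y = 1 > 0
print(is_infeasibility_certificate(A, b, y))  # True
```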

I will discuss homogeneous convex cones. We first characterize the
extreme rays of such cones in the context of their primal
construction (due to Vinberg) and also in the context of their dual
construction (due to Rothaus). Then, using these results,
we prove that every homogeneous cone is facially exposed. We provide
an alternative proof of a result of G\"uler and Tun\c{c}el that
the Siegel rank of a symmetric cone is equal to its Carath\'eodory number.
Our proof does not use the Jordan-von Neumann-Wigner
characterization of the symmetric cones but it easily follows from the
primal construction of the homogeneous cones and
our results on the geometry of homogeneous cones in primal and dual
forms. We briefly discuss the duality mapping in the context
of automorphisms of convex cones and prove that the duality mapping
is not an involution on certain self-dual cones.

This is based on joint work with Van Anh Truong.

Henry Wolkowicz, University of Waterloo, Department of Combinatorics
and Optimization,
and Charles Fortin, McGill University, Department of Mathematics and
Statistics

A Survey of the Trust Region Subproblem Within a Semidefinite Programming
Framework

The trust region subproblem (the minimization of a quadratic objective
subject to one quadratic constraint and denoted TRS) has many applications
in diverse areas, e.g. function minimization, sequential quadratic
programming, regularization, ridge regression, and discrete optimization.
In particular, it is used to determine the step in trust region algorithms
for function minimization. Trust region algorithms are popular due to
their strong convergence properties. However, one of the drawbacks has
been the inability to exploit sparsity as well as the difficulty in
dealing with the so-called hard case. These concerns have been addressed
by recent advances in the theory and algorithmic development.

This talk provides an in-depth study of TRS and its properties as well
as a survey of recent advances. This is done using semidefinite programming
(SDP) and the modern primal-dual approaches as a unifying framework.
The SDP framework solves TRS efficiently and robustly; and it shows that
TRS is always a well-posed problem, i.e. the optimal value and an optimum
can be calculated to a given tolerance. This is contrary to statements
in the literature which label TRS ill-posed or degenerate if the so-called
hard case holds. We provide both theoretical and empirical evidence to
illustrate the strength of the SDP and duality approach. In particular,
this includes new insights into handling the hard case.
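For dense problems, TRS can also be solved directly by eigendecomposition of the Hessian and one-dimensional root-finding on the Lagrange multiplier (a sketch of the classical easy-case approach, not the SDP method surveyed in the talk):

```python
import numpy as np

def trs_eig(H, g, delta, tol=1e-12):
    """Solve min g.x + 0.5 x.H.x  s.t. ||x|| <= delta via eigendecomposition
    of H and bisection on the multiplier.  Easy case only: the 'hard case',
    where g is orthogonal to the bottom eigenspace of H, needs extra care."""
    w, V = np.linalg.eigh(H)
    gt = V.T @ g
    # interior (Newton) solution if H is positive definite and the step fits
    if w[0] > 0:
        x = V @ (-gt / w)
        if np.linalg.norm(x) <= delta:
            return x
    # otherwise the constraint is active: find lam > -w[0] with ||x(lam)|| = delta
    def norm_x(lam):
        return np.linalg.norm(gt / (w + lam))
    lo = max(0.0, -w[0]) + 1e-14   # ||x(lo)|| > delta in the easy case
    hi = lo + 1.0
    while norm_x(hi) > delta:       # grow until ||x(hi)|| <= delta
        hi *= 2.0
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if norm_x(mid) > delta:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return V @ (-gt / (w + lam))

H = np.array([[2.0, 0.0], [0.0, -1.0]])  # indefinite Hessian
g = np.array([1.0, 1.0])
x = trs_eig(H, g, delta=1.0)
print(np.linalg.norm(x))  # ~1: the trust region constraint is active
```

The difficulty this sketch sidesteps in its docstring is exactly the hard case that the SDP framework of the talk handles uniformly.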

Yin Zhang, Department of Computational and Applied Mathematics,
Rice University

We introduce a global optimization problem in computational biology:
the Molecular Replacement Problem, which is a critical step in determining
protein structures through X-ray crystallography. We will report our
approaches and results in helping solve this interesting problem.