We study the uniqueness of minimal liftings of cut generating functions obtained from maximal lattice-free polytopes. We prove a basic invariance property of unique minimal liftings for general maximal lattice-free polytopes. This generalizes a previous result by Basu, Cornu\'ejols and K\"oppe for {\em simplicial} maximal lattice-free polytopes, thus completely settling this fundamental question about lifting. We also extend results from the same paper for minimal liftings in maximal lattice-free simplices to more general polytopes. These nontrivial generalizations require the use of deep theorems from discrete geometry and geometry of numbers, such as the Venkov-Alexandrov-McMullen theorem on translative tilings, and McMullen's characterization of zonotopes.

Maximal S-free sets are inclusion-maximal convex sets whose interior does not contain points of S. For various choices of S these sets can be used to generate cutting planes. In dimension d, if S is the intersection of an integer lattice and an arbitrary convex set, then every maximal S-free set has at most 2^d facets; this bound is sharp. This was proved in various special cases by Lov\'asz; Basu, Conforti, Cornu\'ejols and Zambelli; Dey and Wolsey; Fukasawa and G\"unl\"uk; and in the general setting by Dey and Mor\'an. In this talk I would like to present a general framework for bounding the facet complexity of maximal S-free sets. Within this framework the result of Dey and Mor\'an can be derived very quickly. Also, with a bit more work, a natural generalization of the mentioned result to the mixed-integer situation can be derived.

Sparse cutting-planes are often the ones used in mixed-integer programming (MIP) solvers, since they help in solving the linear programs encountered during branch-and-bound more efficiently. However, how well can we approximate the integer hull by using only sparse cutting-planes? In order to understand this question better, given a polytope P (e.g., the integer hull of a MIP), let P^k be its best approximation using cuts with at most k non-zero coefficients. We consider $d(P, P^k) = \max_{x \in P^k} \min_{y \in P} \|x - y\|$ as a measure of the quality of sparse cuts.
In our first result, we present general upper bounds on $d(P, P^k)$ that depend on the number of vertices of the polytope and exhibit three phases as k increases. Our bounds imply that if P has polynomially many vertices, using half sparsity already approximates it very well. Second, we present a lower bound on $d(P, P^k)$ for random polytopes that shows that the upper bounds are quite tight. Third, we show that for a class of hard packing IPs, sparse cutting-planes do not approximate the integer hull well. Finally, we show that using sparse cutting-planes in extended formulations is at least as good as using them in the original polyhedron, and give an example where the former is actually much better.
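The quantity $d(P, P^k)$ is easy to evaluate on small explicit examples. The following minimal sketch (for illustration only, not code from the paper; the toy polytopes are made up) exploits the fact that $x \mapsto \min_{y \in P} \|x - y\|$ is convex, hence maximized over $P^k$ at one of its vertices, so it suffices to project each vertex of $P^k$ onto $P$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def dist_to_hull(x, V):
    """Euclidean distance from x to conv(rows of V), as a QP over
    convex-combination weights w >= 0 with sum(w) == 1."""
    m = V.shape[0]
    res = minimize(lambda w: np.sum((V.T @ w - x) ** 2),
                   np.full(m, 1.0 / m), bounds=[(0, 1)] * m,
                   constraints=({'type': 'eq',
                                 'fun': lambda w: np.sum(w) - 1.0},))
    return np.sqrt(res.fun)

def d(P_vertices, Pk_vertices):
    return max(dist_to_hull(x, P_vertices) for x in Pk_vertices)

P = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])     # cross-polytope
Pk = np.array([[1., 1.], [1., -1.], [-1., 1.], [-1., -1.]])  # unit box
print(d(P, Pk))  # ~0.707: box vertex (1,1) is sqrt(2)/2 away from P
\end{verbatim}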

For a given set X of integer points, we investigate the smallest number of facets of any polyhedron whose set of integer points is the set of integer points in the convex hull of X. This quantity, which we call the "relaxation complexity" of X, corresponds to the smallest number of linear inequalities of any integer program that has X as its set of feasible solutions and does not use auxiliary variables. We show that the use of auxiliary variables is essential for constructing polynomial size integer programming formulations in many relevant cases. In particular, we provide asymptotically tight exponential lower bounds on the relaxation complexity of the integer points of several well-known combinatorial polytopes, including the traveling salesman polytope and the spanning tree polytope.

We study the minimum number of constraints needed to formulate random instances of the maximum stable set problem via LPs (more precisely, linear extended formulations), in two distinct models.
In the uniform model, the constraints of the LP are not allowed to depend on the input graph, which should be encoded solely in the objective function.
There we prove a $2^{\Omega(n/\log n)}$ lower bound
with probability at least $1 - 2^{-2^n}$ for every LP that is exact for a randomly selected set of instances, where each graph on at most $n$ vertices is selected independently with probability $p \geqslant 2^{- \binom{n/4}{2} + n}$.
In the non-uniform model, the constraints of the LP may depend on the input graph, but we allow weights on the vertices. The input graph is sampled according to the $G(n,p)$ model. There we obtain upper and lower bounds holding with high probability for various
ranges of $p$. The bounds are close, with only an essentially quadratic gap in the exponent.
Finally, we state a conjecture to close the gap.
This is joint work with Sebastian Pokutta and G\'abor Braun (Georgia Tech).

We provide a new framework for establishing strong lower bounds on the nonnegative rank of matrices by means of common information, a notion previously introduced in Wyner [1975]. Common information is a natural lower bound for the nonnegative rank of a matrix, and by combining it with Hellinger distance estimations we can compute the (almost) exact common information of the UDISJ partial matrix. We establish the robustness of this estimation under various perturbations of the UDISJ partial matrix, where rows and columns are randomly or adversarially removed or where entries are randomly or adversarially altered. This robustness translates, via a variant of Yannakakis' Factorization Theorem, to lower bounds on the average case and adversarial approximate extension complexity.
(joint work with Gábor Braun)

The split delivery vehicle routing problem is a relaxation of the classical capacitated vehicle routing problem in which the demand of a customer can be split and delivered using multiple vehicles. We try to solve this problem with a formulation using variables that do not carry a vehicle index. This formulation may have solutions where customer nodes act like depots at which several vehicles arrive and exchange loads. We try to avoid these solutions using (a) cutting planes, (b) a local extension of the formulation with vehicle-indexed variables, and (c) node splitting. We report some preliminary computational results.

To operate its schedule, an airline company builds sequences of flights for its airplanes while respecting maintenance constraints: each aircraft has to stay one night every $D$ days in a maintenance base. This problem is known as the aircraft routing problem in Operations Research.
There is no stable statement of the aircraft routing problem in the literature. For instance, up to the nineties, airline companies practiced the aircraft rotation problem, which enforced an additional constraint: each airplane had to cover each flight within a given period of time. While the complexity of the aircraft rotation problem has been settled (polynomial for D < 4 and NP-complete for D = 4), we are not aware of a proof of the NP-completeness of aircraft routing, which is nonetheless considered to be true in the literature. We give a formal model of aircraft routing as it has been practiced for the last fifteen years, prove its NP-completeness for D > 1, and its polynomiality for a fixed number of airplanes.
This theoretical work enables us to give an integer linear program for aircraft routing with fewer than 2nD variables and n(D+1) constraints, where n is the number of flights. Classical programs for aircraft routing use column generation techniques. We prove that our program has the same continuous relaxation as the one used in the column generation approach. Numerical tests are performed to demonstrate the effectiveness of this approach.

We prove a bound on the diameter of lattice polytopes that improves on a previous result by Kleinschmidt and Onn. Our result implies a bound on the diameter of half-integral polytopes, and we prove that this bound is tight. This is joint work with Alberto Del Pia.

In the early 1970s, by work of Klee and Minty (1972) and Zadeh (1973), the Simplex Method, the Network Simplex Method, and the Successive Shortest Path Algorithm were proved guilty of exponential worst-case behavior (for certain pivot rules). Since then, the common perception is that these algorithms can be fooled into investing senseless effort by ‘bad instances’ such as, e.g., Klee-Minty cubes. This talk promotes a more favorable stance towards the algorithms’ worst-case behavior. We argue that the exponential worst-case performance is not necessarily a senseless waste of time, but may rather be due to the algorithms performing meaningful operations and solving difficult problems on their way. Given one of the above algorithms as a black box, we show that using this black box, with polynomial overhead and a limited interface, we can solve any problem in NP. This also allows us to derive NP-hardness results for some related problems.

Parallel computation requires splitting a job among a set of processing units called workers. The computation is generally performed by a set of one or more master workers that split the workload into chunks and distribute them to a set of slave workers. To guarantee correctness and achieve a desirable balancing of the split, many schemes introduce a (possibly large) overhead due to communication / synchronization among the involved workers.
We propose a simple mechanism to avoid the communication issues of the approach above. In the new paradigm, called SelfSplit, each worker is able to autonomously determine, without any communication with the other workers, the job parts it has to process.
The above feature makes the scheme well suited for applications where communication among workers is time consuming or unreliable. In particular, it allows for a simple yet effective parallelization of divide-and-conquer algorithms with a short input that produce a very large number of time-consuming job parts, as happens, e.g., when an NP-hard problem is solved by an enumerative method.
Computational results will be reported.
(joint work with Michele Monaci and Domenico Salvagnin)
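As a hedged illustration of the paradigm (a sketch, not the authors' implementation; branch, is_terminal and process are placeholder callbacks), every worker deterministically enumerates the same top portion of the divide-and-conquer tree and then, without exchanging a single message, processes only the chunks whose index matches its rank:
\begin{verbatim}
def self_split(root, branch, is_terminal, process, rank, num_workers,
               chunk_limit=1000):
    """Run on each of the num_workers workers with its own rank in
    0..num_workers-1; no inter-worker communication is needed."""
    frontier, chunks = [root], []
    # Phase 1 (sampling): identical deterministic enumeration everywhere.
    while frontier and len(frontier) + len(chunks) < chunk_limit:
        node = frontier.pop()
        if is_terminal(node):
            chunks.append(node)            # too small to split further
        else:
            frontier.extend(branch(node))  # deterministic child order!
    chunks.extend(frontier)                # leftover open nodes are chunks
    # Phase 2: each worker solves only the chunks it owns.
    for i, node in enumerate(chunks):
        if i % num_workers == rank:
            process(node)
\end{verbatim}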

We discuss the variability in the performance of multiple runs of Mixed Integer Linear Programming solvers, concentrating on the variability deriving from the use of different optimal bases of the Linear Programming relaxations. We propose a possible way to exploit performance variability at the root node of a branch-and-cut algorithm, by performing multiple root cut loops in a parallel fashion, and we provide benchmark results to analyze the performance impact of the proposed approach in IBM ILOG CPLEX 12.5.1.

One of the essential components of a branch-and-bound based mixed-integer linear programming (MIP) solver is the branching rule. Strong branching is a method used by many state-of-the-art branching rules to select the variable to branch on. It precomputes the dual bounds of potential child nodes by solving auxiliary linear programs (LPs) and thereby helps make good branching decisions that lead to a small search tree. In this talk, we describe how these dual bound predictions can be improved by including domain propagation into strong branching. Domain propagation is a technique MIP solvers usually apply at every node of the branch-and-bound tree to tighten the local domains of variables. Computational experiments on standard MIP instances indicate that our improved strong branching method significantly improves the quality of the predictions and causes almost no additional effort. For a full strong branching rule, we are able to obtain substantial reductions of the branch-and-bound tree size as well as the solving time. Moreover, the state-of-the-art hybrid branching rule can be improved this way as well.
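A toy reimplementation of the mechanism (for illustration only, not the solver code behind the talk; the propagation shown is a single round of activity-based bound tightening and assumes finite bounds and numpy arrays as inputs): each tentative bound change is propagated before the auxiliary LP is solved, which can tighten further domains or even prove a child infeasible without any LP.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def propagate(A, b, lb, ub):
    """One round of activity-based bound tightening for A x <= b
    (assumes all variable bounds are finite)."""
    lb, ub = lb.copy(), ub.copy()
    for row, rhs in zip(A, b):
        for j, a in enumerate(row):
            if a == 0:
                continue
            rest = sum(min(row[k] * lb[k], row[k] * ub[k])
                       for k in range(len(row)) if k != j)
            if a > 0:
                ub[j] = min(ub[j], (rhs - rest) / a)
            else:
                lb[j] = max(lb[j], (rhs - rest) / a)
    return lb, ub

def lp_bound(c, A, b, lb, ub):
    res = linprog(c, A_ub=A, b_ub=b, bounds=list(zip(lb, ub)))
    return res.fun if res.success else np.inf

def strong_branching(c, A, b, lb, ub, x_lp, candidates,
                     with_propagation=True):
    """Select the candidate with the best product of predicted dual
    bound gains of its two children."""
    root = lp_bound(c, A, b, lb, ub)
    best, best_score = None, -1.0
    for j in candidates:
        gains = []
        for down in (True, False):
            clb, cub = lb.copy(), ub.copy()
            if down:
                cub[j] = np.floor(x_lp[j])    # down child
            else:
                clb[j] = np.ceil(x_lp[j])     # up child
            if with_propagation:
                clb, cub = propagate(A, b, clb, cub)
                if np.any(clb > cub + 1e-9):  # infeasible: skip the LP
                    gains.append(np.inf)
                    continue
            gains.append(lp_bound(c, A, b, clb, cub) - root)
        score = max(gains[0], 1e-6) * max(gains[1], 1e-6)
        if score > best_score:
            best, best_score = j, score
    return best
\end{verbatim}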

We study the convex hull of certain nonlinear sets. For a partitioning polynomial set in three variables, we derive a complete (nonlinear) description of the convex hull in the space of original variables using lifting and projection. We then provide constructive derivations of some of the inequalities arising in the convex hull description. This derivation points to a more general decomposition principle that can be used to obtain the convex hulls of various nonlinear sets.
The talk is based on a paper with T. Nguyen and M. Tawarmalani, and on current work with M. Tawarmalani.

In this talk, we will cover some of the recent developments in Mixed Integer Conic Programming. In particular, we will study nonlinear mixed integer sets involving a general regular (closed, convex, full dimensional, and pointed) cone K, such as the nonnegative orthant, the Lorentz cone or the positive semidefinite cone, and introduce the class of K-minimal valid linear inequalities. Under mild assumptions, we will show that these inequalities, together with the trivial cone-implied inequalities, are sufficient to describe the convex hull. We study the characterization of K-minimal inequalities by identifying necessary and sufficient conditions for an inequality to be K-minimal, and establish relations with the support functions of sets with certain structure, which leads to efficient ways of showing that a given inequality is K-minimal.
This framework naturally generalizes the corresponding results for Mixed Integer Linear Programs (MILPs), which have received a lot of interest recently. In particular, our results recover that the minimal inequalities for MILPs are generated by sublinear (positively homogeneous, subadditive and convex) functions that are also piecewise linear. However our study also reveals that such a cut generating function view is not possible for the conic case even when the cone involved is the Lorentz cone.
Finally, we will conclude by introducing a new technique on deriving conic valid inequalities for sets involving a Lorentz cone via a disjunctive argument. This new technique also recovers a number of results from the recent literature on deriving split and disjunctive inequalities for the mixed integer sets involving a Lorentz cone.
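For orientation, the central definition can be sketched as follows (one common rendering of the standard setup, which may differ in details from the talk): with $K \subseteq \mathbb{R}^n$ a regular cone and $S(A,\mathcal{B}) = \{x \in K : Ax \in \mathcal{B}\}$, a valid inequality $\mu^\top x \ge \eta_0$ for $S(A,\mathcal{B})$ is {\em $K$-minimal} if there is no valid inequality $\rho^\top x \ge \eta_0$ with
\[
  \rho \neq \mu \quad\text{and}\quad \mu - \rho \in K^{*},
\]
since $\mu - \rho \in K^{*}$ gives $\mu^\top x \ge \rho^\top x$ for all $x \in K$, so such a $\rho$ would make $\mu$ redundant in the presence of the cone constraint.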

The trust-region subproblem minimizes a general quadratic
function over an ellipsoid and can be solved in polynomial time
using, for example, a semidefinite-programming (SDP) relaxation.
However, intersecting the feasible set with a second ellipsoid---a
problem called the two-trust-region subproblem (TTRS)---makes the optimization considerably more difficult. For example, the analogous SDP relaxation admits a gap, and the computational complexity of TTRS is unknown. Recent research has tightened the SDP relaxation using valid second-order-cone inequalities, but closing the gap requires more. In this talk, for the special case of TTRS with two variables, we fully characterize the remaining valid inequalities, which include, for example, strengthened versions of the second-order-cone inequalities just mentioned. Our approach examines the global and local behavior of general quadratics over the intersection of two ellipsoids in two variables. We also discuss computational issues and generalizations to the case of an arbitrary number of variables.
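To make the SDP relaxation mentioned above concrete, here is a minimal sketch (a toy example; cvxpy and all data below are assumptions for illustration, not material from the talk) of the standard Shor-type lifting, where $M \approx \begin{pmatrix} 1 & x^\top \\ x & xx^\top \end{pmatrix}$ is relaxed to an arbitrary PSD matrix:
\begin{verbatim}
import cvxpy as cp
import numpy as np

n = 2
Q = np.array([[0.0, 1.0], [1.0, -1.0]])  # indefinite quadratic objective
c = np.array([1.0, 0.0])
A1 = np.eye(n)                           # first ellipsoid:  x'A1x <= 1
A2 = np.diag([4.0, 0.25])                # second ellipsoid: x'A2x <= 1

M = cp.Variable((n + 1, n + 1), PSD=True)  # models [[1, x'], [x, X]]
x, X = M[1:, 0], M[1:, 1:]
prob = cp.Problem(
    cp.Minimize(cp.trace(Q @ X) + c @ x),
    [M[0, 0] == 1, cp.trace(A1 @ X) <= 1, cp.trace(A2 @ X) <= 1])
prob.solve()
print(prob.value)  # lower bound; for TTRS it can be strictly below
                   # the true optimum (the gap discussed above)
\end{verbatim}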

In this talk we are interested in algorithms for finding 2-factors that cover certain prescribed edge-cuts in bridgeless cubic graphs.
We present an algorithm for finding a minimum-weight 2-factor covering all the 3-edge cuts in weighted bridgeless cubic graphs, together with a polyhedral description of such 2-factors and that of perfect matchings intersecting all the 3-edge cuts in exactly one edge.
We further give an algorithm for finding a 2-factor covering all the 3- and 4-edge cuts in bridgeless cubic graphs.
Both of these algorithms run in O(n^3) time, where n is the number of vertices.
As an application of the latter algorithm, we design a 6/5-approximation algorithm for finding a minimum 2-edge-connected subgraph in 3-edge-connected cubic graphs, which improves upon the previous best ratio of 5/4.
The algorithm begins by finding a 2-factor covering all 3- and 4-edge cuts, which is the complexity bottleneck, and thus it has running time O(n^3). We then improve this time complexity to O(n^2 log^4 n) by relaxing the condition on the initial 2-factor and adapting the subsequent steps.

A simple 2-matching in a simple undirected graph is a subgraph in which every node has degree 1 or 2. A simple 2-matching is called 1-restricted if each of its connected components has at least two edges. We introduce two classes of valid inequalities for the 1-restricted simple 2-matching polytope: r-1 and r-2 blossom inequalities. We characterize the facet-inducing r-1 and r-2 blossom inequalities on a class of 1-restricted blossoms, and characterize the facet-inducing r-2 blossom inequalities with depth 2.

A reformulation of the (non-convex) European day-ahead electricity market problem with binary orders is given, which avoids complementarity constraints and also avoids the use of auxiliary binary variables to linearize these constraints. Instead of relaxing economically meaningful complementarity constraints that would hold in the well-behaved continuous context, the reformulation uses a relaxation of a dual program together with a condition of equality of objective functions. In this particular mixed integer setting, the reformulation shows how to use classical LP/QP duality results in a way that is interesting both from the algorithmic and the economic modelling point of view. It also allows deriving a Benders decomposition procedure with locally strengthened cuts that avoids the big-M’s introduced in the original reformulation, which is particularly useful in the quadratic programming setting, where the equality-of-objective-functions condition yields a dense quadratic constraint. Numerical results are presented to compare the decomposition and full model approaches, on both real and very combinatorial instances, where most orders are binary orders.

Mixed Integer Linear Programming (MILP) models are commonly used to model indicator constraints, which either hold or are relaxed depending on the value of a binary variable. Classification problems with Ramp Loss functions are an important application of such models. Mixed Integer Nonlinear Programming (MINLP) models are usually dismissed because they cannot be solved as efficiently. However, we show here that a subset of classification problems can be solved much more efficiently by a MINLP model with nonconvex constraints. This calls for a reconsideration of the modeling of these indicator constraints.
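For concreteness, a generic example (an illustration, not necessarily the speakers' models) of the two ways an indicator constraint $z = 1 \Rightarrow a^\top x \le b$ can be written is
\[
  a^{\top} x \le b + M (1 - z), \quad z \in \{0,1\} \qquad \text{(MILP, requires a valid big-}M\text{)},
\]
\[
  z\,(a^{\top} x - b) \le 0, \quad z \in \{0,1\} \qquad \text{(MINLP, bilinear and nonconvex, no big-}M\text{)}.
\]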

In the talk we discuss exact and approximation
algorithms for scheduling a single machine with additional non-renewable resource constraints.
We will discuss problems in which jobs either produce some products to meet demands at given due-dates, or consume some non-renewable resources which are replenished at known time points. In the former problem, the objective is to minimize the maximum tardiness of meeting the due-dates, while for the latter problem the goal is to minimize the maximum job completion time.
We will show that the two problems are equivalent, present a fully polynomial time approximation scheme for a special case, and a polynomial time approximation scheme when the number of time periods (due-dates or replenishment dates) is fixed.
The PTAS uses some ideas of the PTAS of Chekuri and Khanna for solving the multiple knapsack problem.

We consider a single machine scheduling problem with generalized min-sum objective. Given some continuous, non-decreasing cost function, we aim at computing a schedule minimizing the total weighted completion time cost. This problem is closely related to scheduling a single machine with nonuniform processing speed. We show that for piecewise linear cost functions the problem is strongly NP-hard. Moreover, we give a tight analysis of the approximation guarantee of Smith's rule under any particular convex or concave cost function. More specifically, for these wide classes of cost functions we reduce the task of determining a worst-case problem instance to a continuous optimization problem, which can be solved by standard algebraic or numerical methods. For monomial cost functions x^k, we show that the tight approximation factor is asymptotically equal to k^((k-1)/(k+1)). To exclude unrealistic worst-case instances, we also give tight bounds for the case of integral processing times that are parametrized by the maximum and total processing time.
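To give a feel for the monomial bound (simple arithmetic, and only indicative, since the statement is asymptotic in $k$):
\[
  k^{\frac{k-1}{k+1}}\Big|_{k=2} = 2^{1/3} \approx 1.26, \qquad
  k^{\frac{k-1}{k+1}}\Big|_{k=3} = 3^{1/2} \approx 1.73 .
\]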

Two important characteristics encountered in many real-world scheduling problems are heterogeneous processors and a certain degree of uncertainty about the sizes of jobs. In this talk we address both, and study for the first time a scheduling problem that combines the classical unrelated machine scheduling model with stochastic processing times of jobs. By means of a novel time-indexed linear programming relaxation, we compute in polynomial time a scheduling policy with performance guarantee (3+D)/2+e. Here, e>0 is arbitrarily small, and D is an upper bound on the squared coefficient of variation of the processing times. When jobs also have individual release dates, our bound is 2+D+e. We also show that the dependence of the performance guarantees on D is tight. Interestingly, via D=0 the currently best known bounds for deterministic scheduling on unrelated machines are contained as a special case.
(Joint work with M. Skutella and M. Sviridenko)

The concept of cut-generating function has its origin in the work of Gomory and Johnson from the 1970s. It has received renewed attention in the past few years. Recently Conforti, Cornu\'ejols, Daniilidis, Lemar\'echal, and Malick proposed a general framework for studying cut-generating functions. However, they gave an example showing that not all cuts can be produced by cut-generating functions in this framework. They conjectured a natural condition under which cut-generating functions might be sufficient. Our work settles this open problem.

In an instance of the classical, cooperative matching game
introduced by Shapley and Shubik [Int. J. Game Theory '71] we are
given an undirected graph $G=(V,E)$, and we define the value $\nu(S)$ of each subset $S \subseteq V$ as the cardinality of a maximum matching in the subgraph $G[S]$ induced by $S$. The core of such a game contains all "fair" allocations of $\nu(V)$ among
the players of $V$, and is well-known to be non-empty iff graph $G$ is "stable". $G$ is stable if its inessential vertices (those that
are exposed by at least one maximum matching) form a stable set.
In this talk we study the following natural edge-deletion question:
given a graph $G=(V,E)$, can we find a minimum-cardinality "stabilizer"? I.e., can we find a set $F$ of edges whose removal
from $G$ yields a stable graph?
We show that this problem is vertex-cover hard. We then prove that
there is a minimum-cardinality stabilizer that avoids some
maximum-matching of $G$. We employ this insight to give efficient
approximation algorithms for sparse graphs, and for regular graphs.
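The stability test from the definition above is easy to make executable (a small sketch using networkx, not the authors' code): a vertex is inessential iff some maximum matching exposes it, which holds iff deleting it does not decrease the matching number.
\begin{verbatim}
import networkx as nx

def inessential_vertices(G):
    nu = len(nx.max_weight_matching(G, maxcardinality=True))
    return {v for v in G
            if len(nx.max_weight_matching(nx.restricted_view(G, [v], []),
                                          maxcardinality=True)) == nu}

def is_stable(G):
    # G is stable iff its inessential vertices form a stable set.
    ines = inessential_vertices(G)
    return not any(G.has_edge(u, v) for u in ines for v in ines if u != v)

print(is_stable(nx.cycle_graph(3)))  # False: all triangle vertices are
                                     # inessential and pairwise adjacent
print(is_stable(nx.path_graph(2)))   # True: both endpoints are essential
\end{verbatim}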

In this talk, we give an overview of our recent work on the multiobjective linear programming problem (MOLP), which is motivated by the fact that in the MOLP literature we rarely find theoretical performance guarantees for proposed algorithms. From a paper by Khachiyan, Boros, Borys, Elbassioni and Gurvich (2008), we see that if P is not equal to NP, then we cannot enumerate all nondominated basic feasible solutions of an MOLP in polynomial total time. Instead, we suggest a model motivated by the multicriteria decision making community: the goal is to find for each nondominated extreme point y in the objective space one solution x such that Cx = y.
Thus, we analyze the running times of a popular MOLP algorithm, called Benson’s “outer approximation algorithm”, and of a recently discovered geometric dual variant. We show that both algorithms run in polynomial total time for a fixed but arbitrary number of objectives in this model. Additionally, we suggest improvements for both algorithms removing an exponential running time burden from them and revealing the individual advantages of both algorithms.
Moreover, we show that there is an application of the dual algorithm in solving multiobjective combinatorial optimization problems. Using this algorithm, we prove that enumerating all nondominated extreme value vectors can be done in polynomial total time if the number of objectives is fixed and the single objective optimization problem can be solved in polynomial time. Additionally, using our improvements and under the slight restriction of solving the lexicographic variant of the problem, we can further improve the running time.
This enables us to present---to the best of our knowledge---the first computational study for enumerating the extreme nondominated value vectors of the multiobjective assignment and spanning tree problem with more than four objectives. The results show that this algorithm is very competitive compared to the small number of algorithms available to solve this problem for more than two objectives.

We study a new robust path problem, the Online Replacement Path problem (ORP).
Consider the problem of routing a physical package through a faulty network $G=(V,E)$ from a source $s\in V$ to a destination $t\in V$ as quickly as possible. An adversary, whose objective is to maximize this routing time, can choose to remove a single edge in the network.
In one setup, the identity of the edge is revealed to the routing mechanism (RM) while the package is in $s$.
In this setup the best strategy is to route the package along the shortest path in the remaining network. The payoff maximization problem for the adversary becomes the Most Vital Arc problem (MVA), which amounts to choosing the edge in the network whose removal results in a maximal increase of the $s$-$t$ distance. However, the assumption that the RM is informed about the failed edge when standing at $s$ is unrealistic in many applications, in which failures occur online, and, in particular, after the routing has started.
We therefore consider the setup in which the adversary can reveal the identity of the failed edge just before the RM attempts to use this edge, thus forcing it to use a different route to $t$, starting from the current node. The problem of choosing the nominal path
minimizing the worst case arrival time at $t$ in this setup is ORP.
We show that ORP can be solved in polynomial time and study other models naturally providing middle grounds between MVA and ORP. Our results show that ORP comprises a highly flexible and tractable framework for dealing with robustness issues in the design of RMs.

The max stable set problem is a fascinating problem that raises important challenges for cutting planes algorithms and brings together different aspects of combinatorial optimization.
In the past 40 years, several classes of valid inequalities have been studied. Most of these inequalities are special cases of the large family of rank inequalities, and they are defined over graphs with a known structure, e.g., cliques, odd holes, odd anti-holes, and the more general webs.
To the best of our knowledge, the only computational attempt to separate the general family of rank inequalities is based on the notion of edge projection introduced by Mannino and Sassano (1996), which was heuristically exploited in a branch-and-cut solver by Rossi and Smriglio (2001).
In this work, we propose two separation procedures for rank inequalities, both based on formulating and solving the separation problem as a mixed integer program. The first procedure reduces the separation problem to the solution of a finite sequence of MIP problems. The second procedure formulates the separation problem as a bi-level program. In this talk, we discuss the properties of the two separation procedures.
Preliminary computational results on small graphs show that rank inequalities can provide tight bounds.
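For reference, the family under discussion is the following (standard definition, notation may differ from the talk): for $S \subseteq V$, the rank inequality induced by $G[S]$ is
\[
  \sum_{v \in S} x_v \le \alpha(G[S]),
\]
where $\alpha(G[S])$ is the stability number of $G[S]$; clique, odd-hole, odd-antihole and web inequalities are the special cases obtained from structured choices of $S$.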

We study the node-deletion problem consisting of finding a maximum-weight
induced bipartite subgraph of a planar graph with maximum degree
three. We show that this problem is polynomially solvable. It was
shown in \cite{Choi} that it is NP-complete if the maximum degree
is four. We also extend these ideas to the problem of balancing
signed graphs.
We also consider maximum-weight induced acyclic subgraphs of
planar directed graphs. If the maximum degree is three, it is easily
shown that this is polynomially solvable. We show that for planar
graphs with maximum degree four it is NP-complete.

The Multiple Spanning Tree Protocol (MSTP), used in Ethernet networks, maintains a set of spanning trees that are used for routing the demands in the network. Each spanning tree is allocated to a pre-defined set of demands.
We have developed two mixed integer programming models for the Traffic Engineering problem of optimally designing a network implementing the Multiple Spanning Tree Protocol, such that link utilization is minimized. We present tests in order to compare the two formulations, in terms of formulation strength and computing time.
We also propose a binary search algorithm that has proven to be efficient in obtaining quasi-optimal solutions for this problem.

The Steiner connectivity problem is to connect a set of terminal nodes in a graph by a cost-minimal set of paths; it generalizes the Steiner tree problem to hypergraphs. The problem is known to be approximable within a factor of log k if all nodes are terminals. We discuss its approximability if all paths contain at most k edges and provide, in particular, a (k+1)-approximation if all paths contain at most k terminals. The two-terminal case gives rise to a TDI description; this yields a combinatorial companion theorem to Menger's theorem for hypergraphs and characterizes paths and cuts in hypergraphs as a blocking pair.

We study the minimum rectilinear Steiner tree problem in the
presence of obstacles. Traversing obstacles is not strictly
forbidden, but the total length of each connected component in the
intersection of the tree with the interior of the blocked area is
bounded by a constant.
This problem is motivated by the layout of repeater tree topologies, a
central task in chip design. Large blockages might be crossed by wires
on higher layers, but repeaters may not be placed within the blocked
area. An overly long unbuffered piece of interconnect would lead to timing
violations.
We present a 2-approximation algorithm with a worst case running time
of O((k log k)^2), where k is the number of terminals plus the number
of obstacle corner points. Under mild assumptions on the obstacle
structure, as they are prevalent in chip design, the running time is O(k (log k)^2),
which makes it the first applicable 2-approximation algorithm for this problem,
solving instances with up to 783 352 terminals within 102 seconds.
The instances we used have been made public as the "BONN" instances in
the benchmark set of the upcoming 11th DIMACS challenge on Steiner
trees.
This is joint work with Sophie Spirkl.

Heuristics are crucial to MINLP solvers, even more so than to MILP solvers, since they help reduce variable bounds, thus improving the LP relaxation of a nonlinear problem.
We present a variant of the Feasibility Pump (FP) for nonconvex MINLP problems on factorable functions, where the full symbolic information is known. Unlike most FP heuristics, our variant uses a valid relaxation of the nonconvex problem; hence, it can potentially provide better solutions than those based on convex MINLP solvers.
Other features include the use of second-order information at an NLP solution and a pool of promising (but infeasible) solutions used for restarting the heuristic. We also present computational results on instances from common MINLP libraries.
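For readers unfamiliar with the scheme, the classical Feasibility Pump alternates rounding and projection; the following is a paraphrase of that generic loop (not the variant of the talk, which in addition works on a valid relaxation of the nonconvex MINLP, exploits second-order information, and restarts from a solution pool):
\begin{verbatim}
import random

def feasibility_pump(solve_relaxation, project, is_feasible, max_iter=100):
    """solve_relaxation(): optimum of the relaxation; project(z):
    relaxation point closest to z; is_feasible(z): test for the
    original problem. All three are placeholder callbacks."""
    x_rel = solve_relaxation()
    visited = set()
    for _ in range(max_iter):
        x_int = tuple(round(v) for v in x_rel)   # rounding step
        if is_feasible(x_int):
            return x_int
        if x_int in visited:                     # cycle detected: perturb
            x_int = tuple(v + random.choice((-1, 1))
                          if random.random() < 0.3 else v for v in x_int)
        visited.add(x_int)
        x_rel = project(x_int)                   # projection step
    return None
\end{verbatim}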

Quadratic optimization over binary variables is NP-hard even in the unconstrained case. A common approach is to linearize each product in the objective function independently and simply combine the result with the given linear side constraints. This approach yields a correct integer programming model of the problem, but the resulting LP relaxations lead to very weak bounds in general, so that branch-and-cut algorithms based on this simple linearization idea perform very poorly.
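The simple linearization referred to above is the standard one (notation ours): each product $x_i x_j$ of binaries in the objective is replaced by a new variable $y_{ij}$ constrained by
\[
  y_{ij} \le x_i, \qquad y_{ij} \le x_j, \qquad y_{ij} \ge x_i + x_j - 1, \qquad y_{ij} \ge 0,
\]
which is exact for binary $x$ but yields weak LP bounds when many products are linearized independently.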
For this reason, we examine the problem version with only one product term in the objective function, but with all linear side constraints taken into account. We concentrate on applications where the underlying linear problem is tractable since in this case, the optimization problem with one quadratic term is still tractable. We apply this idea to three problems, the quadratic minimum spanning tree, the quadratic assignment and the quadratic matching problem, and present several new facet classes with efficient separation algorithms for the corresponding problems with one quadratic term. Computational results for the quadratic minimum spanning tree problem show significant improvements over the standard linearization.

We study a convex integer minimization problem in variable dimension. Given an integer matrix W with n columns and d rows, a convex function f over R^d and a polyhedron P in R^n, we would like to minimize f(Wx) subject to x in P, integer. Assuming d to be fixed, and W to be given in unary representation, we study classes of polyhedra P, for which the latter problem can be solved in polynomial time.

Let $S_1,\ldots,S_k$ be $k$ sets of points in $\mathbb{Q}^d$. The {\em colorful linear programming problem}, defined by B\'ar\'any and Onn ({\em Mathematics of Operations Research}, {\bf 22} (1997), 550--567), aims at deciding whether there exists a $T\subseteq\bigcup_{i=1}^k S_i$ such that $|T\cap S_i|\leq 1$ for $i=1,\ldots,k$ and $0\in\operatorname{conv}(T)$. They proved in their paper that this problem is NP-complete when $k=d$, and left as an open question its complexity status when $k=d+1$. Contrary to the case $k=d$, this latter case still makes sense when the points are in generic position.
We solve the question by proving that this case is also NP-complete. The proof is inspired by the proof of the NP-completeness of the linear complementarity problem and uses relationships between colorful linear programming and complementarity problems that we make explicit in this paper. We also show that, if \textup{P=NP}, then there is an easy polynomial-time algorithm computing a Nash equilibrium in bimatrix games, using as a subroutine any polynomial-time algorithm solving the case with $k=d+1$ and $|S_i|\leq 2$ for $i=1,\ldots,d+1$. Along the way, we found a new way to prove, with the help of Sperner's lemma, that a complementarity problem belongs to the class PPAD. We also show that we can adapt algorithms proposed by B\'ar\'any and Onn for computing a feasible solution $T$ in a special case, obtaining what can be interpreted as a ``Phase I'' simplex method, without any projection or distance computation.

The discrete ordered median location model is a powerful tool in modeling
classic and alternative location problems that has been applied with success
to a large variety of discrete location problems. Nevertheless, although hub
location models have been analyzed from the sum, maximum and coverage
points of view, as far as we know, they have never been considered under
an alternative unifying point of view. In this paper we consider new formulations,
based on the ordered median objective function, for hub location
problems with new distribution patterns induced by the different users' roles
within the supply chain network. This approach introduces some penalty
factors associated with the position of an allocation cost with respect to the
sorted sequence of these costs. First we present basic formulations for this
problem, and then develop stronger formulations by exploiting properties of
the model. The performance of all these formulations is compared by means
of a computational analysis.

The best performing exact algorithms for the Capacitated Vehicle Routing Problem developed in the last 10 years are based on the combination of cut and column generation. Some authors only used cuts expressed over the variables of the original formulation, in order to keep the pricing subproblem relatively easy. Other authors could reduce the duality gaps by also using a restricted number of cuts over the Master LP variables, stopping when the pricing becomes prohibitively hard. A particularly effective family of such cuts are the Subset Row Cuts. This work introduces a technique for greatly reducing the impact of these cuts on the pricing, thus allowing many more cuts to be added. The newly proposed Branch-Cut-and-Price algorithm also incorporates and combines for the first time (often in an improved way) several elements found in previous works, like route enumeration and strong branching. All the instances used for benchmarking exact algorithms, with up to 199 customers, were solved to optimality. Moreover, larger instances with up to 360 customers, previously considered only by heuristic methods, were solved too.

The target visitation problem (TVP) is concerned with finding a route to visit a set of targets starting from and returning to some base. In addition to the distance traveled, a tour is also evaluated by taking into account preferences concerning the sequence in which the targets are visited. The problem thus is a combination of two well-known combinatorial optimization problems: the traveling salesman problem and the linear ordering problem. The TVP was introduced to serve the planning of routes for unmanned aerial vehicles (UAVs), and it can be employed to model several kinds of routing problems with additional restrictions. In this talk we point out some properties of the polyhedral structure of an associated polytope and also present an extended formulation. We use this formulation to develop an algorithm. Computational results will be discussed.

A. Huber and V. Kolmogorov (2012) introduced the concept of $k$-submodular functions as a generalization of ordinary submodular (set) functions and bisubmodular functions, and obtained a min-max theorem for minimization of $k$-submodular functions.
Also F. Kuivinen (2011) considered submodular functions on (product lattices of) diamonds and showed a min-max theorem for minimization of submodular functions on diamonds.
In the present paper we consider a common generalization of $k$-submodular functions and submodular functions on diamonds, which we call a transversal submodular function (or a t-submodular function, for short).
We show a min-max theorem for minimization of t-submodular functions in terms of a new norm composed of $\ell_1$ and $\ell_\infty$ norms.
This reveals a relationship between the obtained min-max theorem and that for minimization of ordinary submodular set functions due to J. Edmonds (1970).
We also show how our min-max theorem for t-submodular functions can be used to prove the min-max theorem for $k$-submodular functions by Huber and Kolmogorov and that for submodular functions on diamonds by Kuivinen.
(Joint work with Shin-ichi Tanigawa (RIMS, Kyoto Univ.))

We study a class of two-stage stochastic integer programs with general integer variables in both stages and finitely many realizations of the uncertain parameters. We propose decomposition algorithms, based on Benders method, that utilize Gomory cuts in both stages. The Gomory cuts for the second-stage scenario subproblems are parameterized by the first-stage decision variables. We prove the finite convergence of the proposed algorithms and report our computations that illustrate their effectiveness.

With stochastic integer programming as the motivating application,
we investigate techniques to use integrality information to obtain
improved cuts within a Benders decomposition algorithm. We consider
two options: (i) cut-and-project, where integrality information is
used to derive cuts in the extended variable space, and Benders cuts are then used to project the resulting improved relaxation, and (ii) project-and-cut, where integrality information is used to derive cuts directly in the projected space defined by Benders cuts. We analyze the use of split cuts in these two approaches, and demonstrate that although they yield equivalent relaxations when considering a single split, cut-and-project yields stronger relaxations in general when using multiple splits. Computational results illustrate that the difference can be very large, and demonstrate that using split cuts within the cut-and-project framework can significantly outperform other general purpose methods.

We solve a basic network design problem: find minimum-cost integer capacities such that all demand vectors from an uncertainty set can be routed through the network with a single-commodity flow. We consider the case where the uncertainty set is a certain polytope and tackle the problem with linear programming methods. In particular, we show how to find elements of the uncertainty set that cannot be routed with a given capacity installation, and how to separate cut-set inequalities in our setting.

We address the solution of a very challenging (and previously unsolved) instance of the quadratic 3-dimensional assignment problem, arising in digital wireless communications. The paper describes the techniques developed to solve this instance to optimality, from the choice of an appropriate mixed-integer programming formulation, to cutting planes and symmetry handling. Using these techniques we were able to solve the target instance with moderate computational effort (2.5 million nodes and one week of computations on a standard PC).

The discrete Newton algorithm has a good track record for solving various network-type problems. We extend this record towards problems where a budget may be used to reduce arc capacities: (1) Network Interdiction; (2) Reverse Network Interdiction; (3) Max Robust Flow; and (4) Parametric Scheduling. In cases (1) and (2) we get more combinatorial approximation algorithms and faster running times than previously known; in (3) we show hardness and provide an approximation algorithm; and in (4) we obtain new results for what was an open problem.
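To illustrate the underlying mechanics (a generic sketch; the algorithms in the talk work with flow/cut oracles rather than the toy oracle below): for a decreasing, concave, piecewise-linear function, discrete Newton repeatedly jumps to the root of the currently active linear piece, and terminates because there are only finitely many pieces.
\begin{verbatim}
def discrete_newton(oracle, lam=0.0, tol=1e-9):
    """Root of a decreasing, concave, piecewise-linear f; oracle(lam)
    returns (f(lam), slope of the piece active at lam)."""
    while True:
        value, slope = oracle(lam)
        if abs(value) <= tol:
            return lam
        lam -= value / slope  # jump to the root of the active piece

# toy oracle: f(lam) = min(10 - lam, 14 - 3*lam), root at 14/3
def oracle(lam):
    pieces = [(10.0, -1.0), (14.0, -3.0)]  # (intercept, slope)
    vals = [a + s * lam for a, s in pieces]
    i = min(range(len(pieces)), key=vals.__getitem__)
    return vals[i], pieces[i][1]

print(discrete_newton(oracle))  # 4.666... = 14/3
\end{verbatim}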