Advances in Algorithms and Hardware Help Speed Optimization

The interplay of algorithmic developments and hardware
improvements has produced the immense speed-up in optimization codes over the
last few decades. There has long been a debate over which of the two has
contributed more to this progress. Hardware development seems to have taken two
main routes: the supercomputer and, at the lower end, the multi-processor,
multicore chip.

The supercomputer has become even more “super”, but the
special algorithms once developed for vector processing are no longer used.
However, we should not ignore the role played by algorithm development in the
recent past. Without it, vector and parallel supercomputers would have
conferred almost no advantage on linear programming (simplex) based methods. The
one bright spot was the implementation of “embarrassingly parallel”
algorithms for solving discrete problems, such as mixed integer programs, by
branch and bound or its relatives, often achieving super-linear speed-up with
the number of processors.
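
To make the idea concrete, here is a toy sketch, not taken from any particular solver, of an embarrassingly parallel branch and bound for a 0/1 knapsack problem: the root is split into independent subtrees by fixing the first few item decisions, and each subtree is searched serially with no communication between workers. The instance data, the prefix-splitting scheme, and the helper names (`bound`, `solve_subtree`) are all invented for illustration.

```python
# Toy embarrassingly parallel branch and bound for 0/1 knapsack:
# each worker searches one independent subtree, no shared bounds.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

values   = [60, 100, 120, 80, 40]   # illustrative instance data
weights  = [10, 20, 30, 15, 5]
capacity = 40

def bound(i, value, room):
    """Fractional-relaxation upper bound over the remaining items i.."""
    for v, w in sorted(zip(values[i:], weights[i:]),
                       key=lambda t: t[0] / t[1], reverse=True):
        if w <= room:
            room -= w
            value += v
        else:
            return value + v * room / w   # take a fraction of this item
    return value

def solve_subtree(prefix):
    """Serial depth-first B&B over the subtree where the first
    len(prefix) item decisions are fixed by `prefix` (tuple of 0/1)."""
    room, value = capacity, 0
    for x, v, w in zip(prefix, values, weights):
        if x:
            room -= w
            value += v
    if room < 0:                 # infeasible prefix: subtree is empty
        return 0
    best = value
    stack = [(len(prefix), value, room)]
    while stack:
        i, val, rm = stack.pop()
        best = max(best, val)
        if i == len(values) or bound(i, val, rm) <= best:
            continue             # leaf reached, or subtree pruned
        stack.append((i + 1, val, rm))                        # skip item i
        if weights[i] <= rm:                                  # take item i
            stack.append((i + 1, val + values[i], rm - weights[i]))
    return best

if __name__ == "__main__":
    # Fix the first two decisions -> four independent subtrees.
    prefixes = list(product((0, 1), repeat=2))
    with ProcessPoolExecutor() as pool:
        print("best value:", max(pool.map(solve_subtree, prefixes)))
    # prints: best value: 220
```

The super-linear speed-ups mentioned above arise in practice when one subtree quickly finds a good incumbent that, once shared, prunes work in the others; this no-communication sketch deliberately omits that refinement.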

The situation at the low end has been different. Some test
LPs that once took over an hour on a mainframe can now be solved in a few
seconds. Many of the algorithmic advances in sparse matrix methods were
published decades ago, but their efficiency has been dramatically improved by
rules such as “never touch a zero” and by computer science concepts such as
heaps (both illustrated in the sketch below). However, hardware has played a
prominent role, and not just through Moore’s law. The basic laptop or desktop
now has multiple processors, each often with multiple cores, together with
multi-level caches, on top of the steady speed gains that accrue over time.
Thus hardware currently seems to be winning the speed race. Even so, studies
suggest that over a longer span, say 20 years, the speed-ups have been divided
quite evenly between mathematics/algorithms and hardware improvements.
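
As a minimal illustration of those two ideas, and an assumption for exposition rather than code from any production solver, the sketch below shows a vector update that obeys “never touch a zero” by visiting only the stored nonzeros, followed by a heap used to pick the sparsest pivot column (a much simplified Markowitz-style rule).

```python
import heapq

def sparse_axpy(alpha, x_nonzeros, y):
    """y += alpha * x, where x is stored as (index, value) pairs.
    Cost is O(nnz(x)), not O(n): the zeros of x are never touched."""
    for i, v in x_nonzeros:
        y[i] += alpha * v

# Dense y of length 8; x has only three nonzeros, so only three entries move.
y = [0.0] * 8
x = [(1, 2.0), (4, -1.0), (6, 0.5)]
sparse_axpy(3.0, x, y)
print(y)  # [0.0, 6.0, 0.0, 0.0, -3.0, 0.0, 1.5, 0.0]

# A heap returns the column with the fewest nonzeros in O(log n) time,
# a cheap stand-in for the pivot-selection bookkeeping in sparse codes.
column_counts = [(3, "c0"), (1, "c1"), (5, "c2"), (2, "c3")]
heapq.heapify(column_counts)
print(heapq.heappop(column_counts))  # (1, 'c1'): the sparsest column
```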

What of the future? It is difficult to foresee fundamentally
new algorithmic developments in basic LP and sparse matrix methods, but some
areas of optimization that were under-developed are beginning to come into
their own. One of these is non-linear programming (NLP), which after years in
the shadows (computationally, that is – it is prominent in the literature) is
becoming ready for prime time. Some hardware developments – more cores, for
example – are easily foreseeable, but will there be new breakthroughs, and the
algorithms to exploit them? Time will tell.