I am not an optimizer by training. My road to optimization went through convex analysis. I started with variational methods for inverse problems and mathematical imaging with the goal of deriving properties of minimizers of convex functions. Hence, I studied a lot of convex analysis. Later I got interested in how to actually solve convex optimization problems and started to read books about (convex) optimization. At first I was always distracted by the way optimizers treated constraints. To me, a convex optimization problem always looks like

$$\min_x F(x)$$

with a convex function $F:\mathbb{R}^n\to\mathbb{R}\cup\{\infty\}$.

Everything can be packed into the convex objective. If you have a convex objective $f$ and a constraint $g(x)\le 0$ with a convex function $g$, just take $F = f + i_{\{g\le 0\}}$, i.e., add the indicator function of the constraint set to the objective (for some strange reason, Wikipedia has the names and notation for indicator and characteristic function the other way round than I, and many others, use them). Here $i_C(x) = 0$ if $x\in C$ and $i_C(x) = \infty$ otherwise. Similarly for multiple constraints or linear equality constraints and such.

In this simple world it is particularly easy to characterize all solutions of convex minimization problems: They are just those $x$ for which

$$0 \in \partial F(x).$$

Simple as that. Just take the subgradient of the objective and that’s it.

When reading the optimization books and seeing how difficult the treatment of constraints is there, I was especially puzzled by how complicated optimality conditions such as KKT look in contrast to $0\in\partial F(x)$, and also by the notion of constraint qualifications.

These constraint qualifications are additional assumptions that are needed to ensure that a minimizer fulfills the KKT-conditions. For example, if one has constraints $g_i(x)\le 0$, then the linear independence constraint qualification (LICQ) states that all the gradients $\nabla g_i(x)$ for constraints that are “active” (i.e. $g_i(x)=0$) have to be linearly independent.

It took me a while to realize that there is a similar issue in my simple “convex analysis view” on optimization: When passing from the gradient of a function to the subgradient, many things stay as they are. But not everything. One thing that does change is the simple sum-rule. If $f$ and $g$ are differentiable, then $\nabla(f+g)(x) = \nabla f(x) + \nabla g(x)$, always. That’s not true for subgradients! You always have that $\partial f(x) + \partial g(x) \subseteq \partial(f+g)(x)$. The reverse inclusion is not always true but holds, e.g., if there is some point at which $f$ is finite and $g$ is continuous. At first glance this sounds like a very weak assumption. But in fact, this is precisely in the spirit of constraint qualifications!

Take two constraints $g_1(x)\le 0$ and $g_2(x)\le 0$ with convex and differentiable $g_{1,2}$. We can express these by $x\in C_i = \{x : g_i(x)\le 0\}$ ($i=1,2$). Then it is equivalent to write

$$\min_x f(x) + i_{C_1}(x) + i_{C_2}(x)$$

and

$$\min_x f(x) + i_{C_1\cap C_2}(x).$$

So characterizing solutions of either of these is just saying that $0 \in \partial\big(f + i_{C_1} + i_{C_2}\big)(x)$. Oh, there we are: Are we allowed to pull the subgradient apart? We need to apply the sum rule twice and at some point we need that there is a point at which one of the indicator functions is finite and the other one is continuous (or vice versa)! But an indicator function is only continuous in the interior of the set where it is finite. So the simplest form of the sum rule only holds in the case where only one of the two constraints is active! Actually, the sum rule holds in many more cases but it is not always simple to find out if it really holds for some particular case.

So, constraint qualifications are indeed similar to rules that guarantee that a sum rule for subgradients holds.

Geometrically speaking, both shall guarantee that if one “looks at the constraints individually” one can still see what is going on at points of optimality. It may well be that the sum of the individual subgradients is too small to give any points with $0 \in \partial f(x) + \partial i_{C_1}(x) + \partial i_{C_2}(x)$ but still there are solutions to the optimization problem!

As a very simple illustration take the constraints $y\le 0$ and $y\ge x^2$ in two dimensions, i.e. $C_1 = \{(x,y) : y\le 0\}$ and $C_2 = \{(x,y) : y\ge x^2\}$. The first constraint says “be in the lower half-plane” while the second says “be above the parabola $y = x^2$”. Now take the point $(0,0)$ which is on the boundary of both sets. It’s simple to see (geometrically and algebraically) that $\partial i_{C_1}(0,0) = \{0\}\times[0,\infty)$ and $\partial i_{C_2}(0,0) = \{0\}\times(-\infty,0]$, so treating the constraints individually gives $\partial i_{C_1}(0,0) + \partial i_{C_2}(0,0) = \{0\}\times\mathbb{R}$. But the full story is that $C_1\cap C_2 = \{(0,0)\}$, thus $\partial i_{C_1\cap C_2}(0,0) = \mathbb{R}^2$ and consequently, the subgradient is much bigger.


The Douglas-Rachford method is a method to solve a monotone inclusion

$$0 \in Ax + Bx$$

with two maximally monotone operators $A$, $B$ defined on a Hilbert space $H$. The method uses the resolvents $J_A = (I+A)^{-1}$ and $J_B = (I+B)^{-1}$ and produces two sequences of iterates

$$x^{k+1} = J_A(u^k),\qquad u^{k+1} = u^k - x^{k+1} + J_B(2x^{k+1} - u^k).$$

To derive it, note that $0 \in Ax + Bx$ holds if and only if there is a $y$ with $y \in Bx$ (i.e. $x \in B^{-1}y$) and $-y \in Ax$, which we can write as

$$0 \in \begin{pmatrix} A & I\\ -I & B^{-1}\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}.$$

This is again a monotone inclusion, but now on $H\times H$. We introduce the positive semi-definite operator

$$M = \begin{pmatrix} I & -I\\ -I & I\end{pmatrix}$$

and perform the iteration

$$0 \in \begin{pmatrix} A & I\\ -I & B^{-1}\end{pmatrix}\begin{pmatrix}x^{k+1}\\ y^{k+1}\end{pmatrix} + M\begin{pmatrix}x^{k+1} - x^k\\ y^{k+1} - y^k\end{pmatrix}.$$

(This is basically the same as applying the proximal point method to the inclusion above, preconditioned with $M$.)

Writing out the iteration gives

$$x^{k+1} = J_A(x^k - y^k),\qquad y^{k+1} = J_{B^{-1}}\big(2x^{k+1} - x^k + y^k\big).$$

Now, applying the Moreau identity for monotone operators ($J_{B^{-1}} = I - J_B$), gives

$$y^{k+1} = \big(2x^{k+1} - x^k + y^k\big) - J_B\big(2x^{k+1} - x^k + y^k\big);$$

substituting $u^k = x^k - y^k$ finally gives Douglas-Rachford:

$$x^{k+1} = J_A(u^k),\qquad u^{k+1} = u^k - x^{k+1} + J_B\big(2x^{k+1} - u^k\big)$$

(besides the stepsize $\lambda$, which we would get by starting with the equivalent inclusion $0 \in \lambda Ax + \lambda Bx$ in the first place).

Probably the shortest derivation of Douglas-Rachford I have seen. Oh, and also the (weak) convergence proof comes for free: It’s a proximal point iteration and you just use the result by Rockafellar from “Monotone operators and the proximal point algorithm”, SIAM J. Control and Optimization 14(5), 1976.
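To see the iteration in action, here is a minimal sketch on a toy problem of my own choosing (not from the post): finding a point in the intersection of two convex sets, where $A$ and $B$ are the normal cone operators of the sets, so that the resolvents $J_A$, $J_B$ are the orthogonal projections.

```python
import numpy as np

# Douglas-Rachford sketch on a toy feasibility problem (my own example):
# find a point in the intersection of two convex sets in R^2, where the
# resolvents J_A, J_B are the projections onto the sets.

def proj_halfplane(z):
    # projection onto C1 = {(x, y) : y <= 1}
    x, y = z
    return np.array([x, min(y, 1.0)])

def proj_ball(z):
    # projection onto C2 = {z : ||z - (3, 0)|| <= 2}
    c = np.array([3.0, 0.0])
    d = z - c
    n = np.linalg.norm(d)
    return c + d * min(1.0, 2.0 / n) if n > 0 else c + np.array([2.0, 0.0])

def douglas_rachford(JA, JB, u0, iters=500):
    u = u0.copy()
    for _ in range(iters):
        x = JA(u)                      # x^{k+1} = J_A(u^k)
        u = u - x + JB(2 * x - u)      # u^{k+1} = u^k - x^{k+1} + J_B(2x^{k+1} - u^k)
    return JA(u)

x = douglas_rachford(proj_halfplane, proj_ball, np.array([-5.0, 5.0]))
print(x)  # a point (approximately) in both sets
```

The sets, starting point and iteration count are invented for illustration; any pair of sets with computable projections would do.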


Currently I am at the SIAM Imaging conference in Hong Kong. It’s a great conference with great people at a great place. I am pretty sure that this will be the only post from here, since the conference is quite intense. I just wanted to report on two ideas that have become clear here; they are both pretty easy and probably already widely known, but anyway:

1. Non-convex + convex objective

There are a lot of talks that deal with optimization problems of the form

$$\min_x F(x) + G(x).$$

Especially, people try to leverage as much structure of the functionals $F$ and $G$ as possible. Frequently, there arises a need to deal with non-convex parts of the objective, and indeed, there are several approaches around that deal in one way or another with non-convexity of $F$ or even $G$. Usually, in the presence of an $F$ that is not convex, it is helpful if $G$ has favorable properties, e.g. that $F+G$ is still bounded from below, coercive or even convex again. A particularly helpful property is strong convexity of $G$ (i.e. $G$ stays convex even if you subtract $\tfrac{c}{2}\|\cdot\|^2$ from it). Here comes the simple idea: If you already allow $F$ to be non-convex, but only have a $G$ that is merely convex, but not strongly so, you can modify your objective to

$$\min_x \underbrace{\Big(F(x) - \tfrac{c}{2}\|x\|^2\Big)}_{\tilde F(x)} + \underbrace{\Big(G(x) + \tfrac{c}{2}\|x\|^2\Big)}_{\tilde G(x)}$$

for some $c>0$. This will give you strong convexity of $\tilde G$ and an $\tilde F$ that is (often) theoretically no worse than it used to be. It appeared to me that this is an idea that Kristian Bredies told me almost ten years ago and which we made into a paper (together with Peter Maaß) in 2005 which somehow got delayed and was published no earlier than 2009.

2. Convex-concave saddle point problems

If your problem has the form

$$\min_x F(Kx) + G(x)$$

with some linear operator $K$ and both $F$ and $G$ convex, it has turned out that it is tremendously helpful for the solution to consider the corresponding saddle point formulation: I.e. using the convex conjugate $F^*$ of $F$, you write

$$\min_x \max_y\ \langle Kx, y\rangle - F^*(y) + G(x).$$

A class of algorithms, that looks like the Arrow-Hurwicz method at first glance, has been sparked by the method proposed by Chambolle and Pock. This method allows $F$ and $G$ to be merely convex (no smoothness or strong convexity needed) and only needs the proximal operators for both $F^*$ and $G$. I also worked on algorithms for slightly more general problems, involving a reformulation of the saddle point problem as a monotone inclusion, with Tom Pock in the paper An accelerated forward-backward algorithm for monotone inclusions and I should also mention this nice approach by Bredies and Sun who consider another reformulation of the monotone inclusion. However, in the spirit of the first point, one should take advantage of all the available structure in the problem, e.g. smoothness of one of the terms. Some algorithms can exploit smoothness of either $F$ or $G$ and only need convexity of the other term. An idea, that has been used for some time already, to tackle the case if $G$, say, is a sum of a smooth part and a non-smooth part (and $F$ is not smooth), is to dualize the non-smooth part of $G$: Say we have $G = G_1 + G_2$ with smooth $G_1$ and non-smooth $G_2$, then you could write

$$\min_x \max_{y,z}\ \langle Kx, y\rangle + \langle x, z\rangle - F^*(y) - G_2^*(z) + G_1(x)$$

and you are back in business, if your method allows for sums of convex functions in the dual. The trick got the sticky name “dual transportation trick” in a talk by Marc Teboulle here and the sticky name will probably help me not to forget it from now on…
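For concreteness, here is a minimal sketch of the Chambolle-Pock primal-dual iteration on a toy problem of my own choosing (the problem, data and parameters are invented, not from the post): $\min_x \|Kx - b\|_1 + \tfrac{\lambda}{2}\|x\|^2$, where $F(y) = \|y - b\|_1$ has conjugate $F^*(y) = \langle b, y\rangle + i_{\{\|\cdot\|_\infty\le 1\}}(y)$.

```python
import numpy as np

# Chambolle-Pock sketch (toy data of my choosing):
#   min_x ||Kx - b||_1 + (lam/2)||x||^2
# prox_{sigma F*}(v) = clip(v - sigma*b, -1, 1)  (projection after a shift)
# prox_{tau G}(v)    = v / (1 + tau*lam)         (for G = (lam/2)||.||^2)

rng = np.random.default_rng(0)
m, n, lam = 30, 20, 0.1
K = rng.standard_normal((m, n))
b = rng.standard_normal(m)

Lnorm = np.linalg.norm(K, 2)      # operator norm of K
tau = sigma = 0.9 / Lnorm         # stepsizes with tau*sigma*||K||^2 < 1

x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)
for _ in range(2000):
    y = np.clip(y + sigma * (K @ x_bar) - sigma * b, -1.0, 1.0)  # dual prox
    x_new = (x - tau * K.T @ y) / (1.0 + tau * lam)              # primal prox
    x_bar = 2 * x_new - x                                        # extrapolation
    x = x_new

obj = np.abs(K @ x - b).sum() + 0.5 * lam * (x @ x)
print(obj)  # well below the objective at x = 0
```

The stepsize rule $\tau\sigma\|K\|^2 \le 1$ is the standard condition from the Chambolle-Pock paper; everything else here (sizes, seed, $\lambda$) is arbitrary.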

As clear from the titles, both papers treat a similar method. The first paper contains all the theory and the second one has a few particularly interesting applications.

In the first paper we propose to view several known algorithms such as the linearized Bregman method, the Kaczmarz method or the Landweber method from a different angle from which they all are special cases of another algorithm. To start with, consider a linear system

$$Ax = b$$

with $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^m$, and the Landweber iteration

$$x^{k+1} = x^k - \lambda_k A^T(Ax^k - b).$$

Obviously, this is nothing else than gradient descent for the functional $F(x) = \tfrac12\|Ax-b\|_2^2$ and it indeed converges to a minimizer of this functional (i.e. a least squares solution) if the stepsizes fulfill $\epsilon \le \lambda_k \le 2/\|A\|^2 - \epsilon$ for some $\epsilon > 0$. If one initializes the method with $x^0 = 0$ it converges to the least squares solution with minimal norm, i.e. to $A^\dagger b$ (with the pseudo-inverse $A^\dagger$).

A totally different method is even older: The Kaczmarz method. Denoting by $a_i$ the $i$-th row of $A$ and by $b_i$ the $i$-th entry of $b$, the method reads as

$$x^{k+1} = x^k - \frac{\langle a_{i(k)}, x^k\rangle - b_{i(k)}}{\|a_{i(k)}\|^2}\, a_{i(k)}$$

where $i(k) = (k \bmod m) + 1$ or any other “control sequence” that picks up every index infinitely often. This method also has a simple interpretation: Each equation $\langle a_i, x\rangle = b_i$ describes a hyperplane in $\mathbb{R}^n$. The method does nothing else than projecting the iterates orthogonally onto the hyperplanes in an iterative manner. In the case that the system has a solution, the method converges to one, and if it is initialized with $x^0 = 0$ we have again convergence to the minimum norm solution $A^\dagger b$.

There is yet another method that solves $Ax = b$ (but now it’s a bit more recent): The iteration produces two sequences of iterates

$$z^{k+1} = z^k - \lambda_k A^T(Ax^k - b),\qquad x^{k+1} = S_\mu(z^{k+1})$$

for some $\mu > 0$, the soft-thresholding function $S_\mu(z) = \max(|z| - \mu, 0)\,\operatorname{sign}(z)$ (applied componentwise) and some stepsize $\lambda_k$. For reasons I will not detail here, this is called the linearized Bregman method. It also converges to a solution of the system. The method is remarkably similar to, but different from, the Landweber iteration (if the soft-thresholding function weren’t there, both would be the same). It converges to the solution of $Ax = b$ that has the minimum value of the functional $\mu\|x\|_1 + \tfrac12\|x\|_2^2$. Since this solution is close to, and for $\mu$ large enough identical to, the minimum $\ell^1$-norm solution, the linearized Bregman method is a method for sparse reconstruction and applied in compressed sensing.
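The three iterations are easy to compare numerically; here is a sketch on a small consistent system (the data is invented for illustration). With $x^0 = 0$, Landweber and Kaczmarz approach the minimum-norm solution $A^\dagger b$, while linearized Bregman approaches a different, sparser solution of the same system.

```python
import numpy as np

# Toy comparison of the three iterations on a consistent system Ax = b.
rng = np.random.default_rng(1)
m, n = 5, 8
A = rng.standard_normal((m, n))
x_sparse = np.zeros(n); x_sparse[[1, 4]] = [2.0, -1.0]
b = A @ x_sparse

step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe stepsize < 2/||A||^2

def landweber(iters=20000):
    x = np.zeros(n)
    for _ in range(iters):
        x -= step * A.T @ (A @ x - b)
    return x

def kaczmarz(sweeps=2000):
    x = np.zeros(n)
    for k in range(sweeps * m):
        i = k % m
        x -= (A[i] @ x - b[i]) / (A[i] @ A[i]) * A[i]
    return x

def linearized_bregman(mu=2.0, iters=20000):
    z = np.zeros(n); x = np.zeros(n)
    for _ in range(iters):
        z -= step * A.T @ (A @ x - b)
        x = np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)  # soft thresholding
    return x

x_mn = np.linalg.pinv(A) @ b                # minimum-norm solution
print(np.linalg.norm(landweber() - x_mn))   # small
print(np.linalg.norm(kaczmarz() - x_mn))    # small
print(np.linalg.norm(A @ linearized_bregman() - b))  # residual small
```

Sizes, seed and iteration counts are arbitrary; the point is only that all three sequences solve the same system.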

Now we put all three methods into a joint framework, and this is the framework of split feasibility problems (SFP). An SFP is a special case of a convex feasibility problem where one wants to find a point in the intersection of multiple simple convex sets. In an SFP one has two different kinds of convex constraints (which I will call “simple” and “difficult” in the following):

Constraints that just demand that $x \in C_i$ for some convex sets $C_i\subseteq\mathbb{R}^n$. I call these constraints “simple” because we assume that the projection onto each $C_i$ is simple to obtain.

Constraints that demand $A_j x \in Q_j$ for some matrices $A_j$ and simple convex sets $Q_j$. Although we assume that projections onto the $Q_j$ are easy, these constraints are “difficult” because of the presence of the matrices $A_j$.

If there were only simple constraints, a very basic method to solve the problem is the method of alternating projections, also known as POCS (projection onto convex sets): Simply project onto all the sets in an iterative manner. For difficult constraints, one can do the following: Construct a hyperplane that separates the current iterate from the set defined by the constraint and project onto the hyperplane. Since projections onto hyperplanes are simple and since the hyperplane separates, we move closer to the constraint set and this is a reasonable step to take. One such separating hyperplane is given as follows: For a difficult constraint $Ax\in Q$ and an iterate $x^k$ with $Ax^k\notin Q$, compute $w = Ax^k - P_Q(Ax^k)$ (with the orthogonal projection $P_Q$) and define

$$H = \{x : \langle A^T w, x\rangle \le \langle A^T w, x^k\rangle - \|w\|^2\}.$$

Indeed, $H$ contains the constraint set (since $\langle w, Ax - P_Q(Ax^k)\rangle \le 0$ whenever $Ax\in Q$) but not $x^k$.

Now we already can unite the Landweber iteration and the Kaczmarz method as follows: Consider the system $Ax = b$ as a split feasibility problem in two different ways:

Treat $Ax = b$ as one single difficult constraint (i.e. set $Q = \{b\}$). Some calculations show that the above proposed method leads to the Landweber iteration (with a special stepsize).

Treat $Ax = b$ as $m$ simple constraints $x\in H_i = \{x : \langle a_i, x\rangle = b_i\}$. Again, some calculations show that this gives the Kaczmarz method.

Of course, one could also work “block-wise” and consider groups of equations as difficult constraints to obtain “block-Kaczmarz methods”.

Now comes the last twist: By adapting the notion of “projection” one gets more methods. Particularly interesting is the notion of Bregman projections which comes from Bregman distances. I will not go into detail here, but Bregman distances are associated to convex functionals $f$ and by replacing “projection onto $C_i$ or hyperplanes” by respective Bregman projections, one gets another method for split feasibility problems. The two things I found remarkable:

The Bregman projection onto hyperplanes is pretty simple. To project some $x^k$ onto the hyperplane $H = \{x : \langle a, x\rangle = \beta\}$, one needs a subgradient $z^k\in\partial f(x^k)$ (in fact an “admissible one” but for that detail see the paper) and then performs

$$x^{k+1} = \nabla f^*\big(z^k - t_k a\big)$$

($f^*$ is the convex dual of $f$) with some appropriate stepsize $t_k$ (which is the solution of a one-dimensional convex minimization problem). Moreover, $z^{k+1} = z^k - t_k a$ is a new admissible subgradient at $x^{k+1}$.

If one has a problem with a constraint $Ax = b$ (formulated as an SFP in one way or another) the method converges to the minimum-$f$ solution of the equation if $f$ is strongly convex.

Note that strong convexity of $f$ implies differentiability of $f^*$ and Lipschitz continuity of $\nabla f^*$ and hence, the Bregman projection can indeed be carried out.

Now one already sees how this relates to the linearized Bregman method: Setting $f(x) = \mu\|x\|_1 + \tfrac12\|x\|_2^2$, a little calculation shows that

$$\nabla f^*(z) = S_\mu(z).$$
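The little calculation can be sketched as follows (a standard conjugacy computation, not spelled out in the post; I write $\mu$ for the thresholding parameter):

```latex
% f(x) = \mu\|x\|_1 + \tfrac12\|x\|_2^2 is separable, so it suffices to
% conjugate the scalar function \varphi(t) = \mu|t| + t^2/2:
f^*(z) = \sum_i \sup_{t\in\mathbb{R}}\Big( z_i t - \mu|t| - \tfrac{t^2}{2}\Big)
       = \sum_i \tfrac12 \max(|z_i| - \mu, 0)^2 ,
% the supremum being attained at t = S_\mu(z_i); differentiating gives
\nabla f^*(z) = \max(|z| - \mu, 0)\,\operatorname{sign}(z) = S_\mu(z).
```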

Hence, using the formulation with a “single difficult constraint” leads to the linearized Bregman method with a specific stepsize. It turns out that this stepsize is a pretty good one, but also that one can show that a constant stepsize works as well, as long as it is positive and stays below an explicit bound.

In the paper we present several examples of how one can use the framework. I see one strength of this approach in the fact that one can add convex constraints to a given problem without getting into any trouble with the algorithmic framework.

The second paper extends a remark that we make in the first one: If one applies the framework of the linearized Bregman method to the case in which one considers the system $Ax = b$ as $m$ simple (hyperplane-)constraints, one obtains a sparse Kaczmarz solver. Indeed one can use the simple iteration

$$z^{k+1} = z^k - \frac{\langle a_{i(k)}, x^k\rangle - b_{i(k)}}{\|a_{i(k)}\|^2}\, a_{i(k)},\qquad x^{k+1} = S_\mu(z^{k+1})$$

and $x^k$ will converge to the same sparse solution as the linearized Bregman method.
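The sparse Kaczmarz iteration is only a few lines; here is a sketch on a small consistent system (toy data of my choosing, with a cyclic control sequence):

```python
import numpy as np

# Sparse Kaczmarz sketch: Kaczmarz step on z, soft thresholding for x.
rng = np.random.default_rng(2)
m, n, mu = 10, 20, 0.5
A = rng.standard_normal((m, n))
x_sparse = np.zeros(n); x_sparse[[3, 7, 12]] = [1.5, -2.0, 1.0]
b = A @ x_sparse                    # consistent system with a sparse solution

z = np.zeros(n)
x = np.zeros(n)
for k in range(5000 * m):
    i = k % m                                          # cyclic control sequence
    z -= (A[i] @ x - b[i]) / (A[i] @ A[i]) * A[i]      # Kaczmarz step on z
    x = np.sign(z) * np.maximum(np.abs(z) - mu, 0.0)   # soft thresholding

print(np.linalg.norm(A @ x - b))   # residual becomes small
```

The sizes, sparsity pattern and value of $\mu$ are arbitrary; the iterate stays sparse throughout, which is the point of the method.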

This method has a nice application to “online compressed sensing”: We illustrate this in the paper with an example from radio interferometry. There, large arrays of radio telescopes collect radio emissions from the sky. Each pair of telescopes leads to a single measurement of the Fourier transform of the quantity of interest. Hence, for $N$ telescopes, each measurement gives $N(N-1)/2$ samples in the Fourier domain. In our example we used data from the Very Large Array telescope which has 27 telescopes leading to 351 Fourier samples. That’s not much, if one wants a picture of the emission with several tens of thousands of pixels. But the good thing is that the Earth rotates (that’s good for several reasons): When the Earth rotates relative to the sky, the sampling pattern also rotates. Hence, one waits a small amount of time and makes another measurement. Commonly, this is done until the Earth has made a half rotation, i.e. one complete measurement takes 12 hours. With the “online compressed sensing” framework we proposed, one can start reconstructing the image as soon as the first measurements have arrived. Interestingly, one observes the following behavior: If one monitors the residual of the equation, it goes down during the iterations and jumps up when new measurements arrive. But from some point on, the residual stays small! This says that the new measurements do not contradict the previous ones and, more interestingly, this happened precisely when the reconstruction error dropped down such that “exact reconstruction” in the sense of compressed sensing has happened. In the example of radio interferometry, this happened after 2.5 hours!


I recently updated my working hardware and now use a tablet pc for work (namely a Nexus 10). In consequence, I also updated the software I use to keep things more synchronized across devices. For my RSS feeds I now use feedly and the gReader app. However, I was not that happy with the method to store and mark papers I found, but I found the sharing interfaces between the apps pretty handy. I adopted the workflow that when I see a paper that I want to remember, I send it to my Evernote account where I tag it. Then, from time to time I go over the papers I marked and have a more detailed look. If I think they deserve to be kept for future reference, they get a small entry here. Here’s the first take with just two papers from the last weeks (there are more in my backlog…):

where the involved operators are maximally monotone, some of them additionally strongly monotone, one of them coercive, the linear operators are bounded, and $\Box$ denotes the parallel sum, i.e. $A\,\Box\,B = (A^{-1} + B^{-1})^{-1}$. Also the proposed algorithm looked a bit like a monster. Then, on later pages, things became a bit more familiar. As an application, they considered a corresponding convex optimization problem in which the monotone operators are replaced by subdifferentials of convex functions, some of them strongly convex and one smooth with a Lipschitz gradient. By noting that the parallel sum is related to the infimal convolution of convex functions, things became clearer. Also, the algorithm looks more familiar now (Algorithm 18 in the paper – I’m too lazy to write it down here). They have an analysis of the algorithm that allows one to deduce convergence rates for the iterates, but I haven’t checked the details yet.

Both papers treat the recently proposed “total generalized variation” model (which is a somehow-but-not-really-higher-order generalization of total variation). The total variation of a function $u\in L^1_{\mathrm{loc}}(\Omega)$ ($\Omega\subseteq\mathbb{R}^d$) is defined by duality as

$$TV(u) = \sup\Big\{\int_\Omega u\,\operatorname{div}\phi\;\mathrm{d}x \;:\; \phi\in C^\infty_c(\Omega,\mathbb{R}^d),\ \|\phi\|_\infty\le 1\Big\}.$$

(Note that the demanded high regularity of the test functions is not essential here, as we take a supremum over all these functions under the only, but important, requirement that the functions are bounded. Test functions from $C^1_c(\Omega,\mathbb{R}^d)$ would also do.)

Several possibilities for extensions and generalizations of the total variation exist by somehow including higher order derivatives. The “total generalized variation” is a particularly successful approach which, in its second order version, reads as (now using two non-negative parameters $\alpha_0,\alpha_1$ which do a weighting):

$$TGV_\alpha^2(u) = \sup\Big\{\int_\Omega u\,\operatorname{div}^2\phi\;\mathrm{d}x \;:\; \phi\in C^\infty_c(\Omega, S^{d\times d}),\ \|\phi\|_\infty\le\alpha_0,\ \|\operatorname{div}\phi\|_\infty\le\alpha_1\Big\}.$$

To clarify some notation: $S^{d\times d}$ are the symmetric $d\times d$ matrices, $\operatorname{div}^2$ is the (formal) adjoint of $\nabla^2$, the differential operator that collects the partial derivatives up to the second order in a symmetric tensor. Moreover $\|\phi\|_\infty$ is the supremum over some matrix norm of $\phi(x)$ (e.g. the Frobenius norm) and $\|\operatorname{div}\phi\|_\infty$ uses some vector norm (e.g. the 2-norm).

Both papers investigate so called denoising problems with TGV penalty and $L^2$ discrepancy, i.e. minimization problems

$$\min_u\ \tfrac12\int_\Omega (u - f)^2\;\mathrm{d}x + TGV_\alpha^2(u)$$

for a given noisy function $f$. Moreover, both papers treat the one dimensional case and investigate very special cases in which they calculate minimizers analytically. In one dimension the definition of $TGV_\alpha^2$ becomes a little more familiar:

$$TGV_\alpha^2(u) = \sup\Big\{\int_\Omega u\,\phi''\;\mathrm{d}x \;:\; \phi\in C^\infty_c(\Omega),\ \|\phi\|_\infty\le\alpha_0,\ \|\phi'\|_\infty\le\alpha_1\Big\}.$$

Some images of both papers are really similar: one from Papafitsoros and Bredies and a corresponding one from Pöschl and Scherzer. Although both papers have very similar scopes, it is worth reading both. The calculations are tedious, but both papers try to make them accessible and try hard (and did a good job) to provide helpful illustrations. Curiously, the earlier paper cites the later one but not conversely…

This paper shows a nice duality which I haven’t been aware of, namely the one between subgradient descent methods and conditional gradient methods. In fact the conditional gradient method which is treated there is a generalization of the method which Kristian and I proposed a while ago in the context of sparsity-penalized minimization in the paper Iterated hard shrinkage for minimization problems with sparsity constraints: To minimize the sum

$$F(x) = f(x) + g(x)$$

with a differentiable $f$ and a convex $g$ for which the subgradient of $g$ is easily invertible (or, put differently, for which you can minimize $\langle a, x\rangle + g(x)$ easily), perform the following iteration:

At iterate $x^k$ linearize $f$ but not $g$ and calculate a new point $\bar x^k$ by

$$\bar x^k = \operatorname{argmin}_x\ \langle\nabla f(x^k), x\rangle + g(x).$$

Choose a stepsize $s_k\in[0,1]$ and set the next iterate as a convex combination of $x^k$ and $\bar x^k$:

$$x^{k+1} = x^k + s_k(\bar x^k - x^k).$$

Note that for $g$ an indicator function

$$g(x) = i_C(x) = \begin{cases} 0 & x\in C\\ \infty & \text{else}\end{cases}$$

you obtain the conditional gradient method (also known as Frank-Wolfe method). While Kristian and I derived convergence with an asymptotic rate for the case of smooth $f$ and strongly coercive $g$, Francis uses the assumption that the conjugate of $g$ has a bounded effective domain (which says that $g$ has linear growth in all directions). With this assumption he obtains explicit constants and rates also for the primal-dual gap. It was great to see that eventually somebody really took up the idea from the paper Iterated hard shrinkage for minimization problems with sparsity constraints (and does not think that we just do heuristics…).
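The indicator-function case is easy to try out; here is a toy instance of the iteration above with $f(x) = \tfrac12\|Ax-b\|^2$ and $g$ the indicator of the $\ell^1$-ball (data, radius and iteration count invented). The linearized subproblem is then solved by a vertex of the ball, which is exactly the classical Frank-Wolfe step.

```python
import numpy as np

# Conditional gradient (Frank-Wolfe) sketch on invented data:
#   f(x) = 0.5*||Ax - b||^2,  g = indicator of {x : ||x||_1 <= R}.
rng = np.random.default_rng(3)
m, n, R = 15, 40, 2.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def grad_f(x):
    return A.T @ (A @ x - b)

x = np.zeros(n)
for k in range(500):
    gk = grad_f(x)
    i = np.argmax(np.abs(gk))              # best vertex of the l1-ball
    xbar = np.zeros(n)
    xbar[i] = -R * np.sign(gk[i])          # minimizes <gk, x> over the ball
    s = 2.0 / (k + 2.0)                    # classical stepsize choice
    x = x + s * (xbar - x)                 # convex combination

print(0.5 * np.linalg.norm(A @ x - b) ** 2)  # objective decreased from x = 0
```

Note that the iterates stay in the $\ell^1$-ball automatically, since each step is a convex combination of feasible points.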

An optimization problem is a problem of the form

$$\min_x f(x)\quad\text{s.t.}\quad x\in C$$

for functions $f:\mathbb{R}^n\to\mathbb{R}$ and sets $C\subseteq\mathbb{R}^n$. One further classifies problems according to additional properties of $f$ and $C$: If $C = \mathbb{R}^n$ one speaks of unconstrained optimization, if $f$ is smooth one speaks of smooth optimization, if $f$ and $C$ are convex one speaks of convex optimization and so on.

1. Classification, goals and accuracy

Usually, optimization problems do not have a closed form solution. Consequently, optimization is not primarily concerned with calculating solutions to optimization problems, but with algorithms to solve them. However, having a convergent or terminating algorithm is not fully satisfactory without knowing an upper bound on the runtime. There are several concepts one can work with in this respect and one is the iteration complexity. Here, one gives an upper bound on the number of iterations (which are only allowed to use certain operations such as evaluations of the function $f$, its gradient $\nabla f$, its Hessian, solving linear systems of dimension $n\times n$, projecting onto $C$, calculating halfspaces which contain $C$, or others) to reach a certain accuracy. But also for the notion of accuracy there are several definitions:

For general problems one can of course desire to be within a certain distance to the optimal point $x^*$, i.e. $\|x - x^*\|\le\epsilon$ for the solution $x^*$ and a given point $x$.

One could also demand that one wants to be at a point $x$ which has a function value close to the optimal one $f^*$, i.e., $f(x) - f^*\le\epsilon$. Note that for this and for the first point one could also desire relative accuracy.

For convex and unconstrained problems, one knows that the inclusion $0\in\partial f(x)$ (with the subgradient $\partial f$) characterizes the minimizers and hence, accuracy can be defined by desiring that $\operatorname{dist}(0, \partial f(x))\le\epsilon$.

It turns out that the first two definitions of accuracy are much too hard to obtain for general problems and even for smooth and unconstrained problems. The main issue is that for general functions one cannot decide if a local minimizer is also a solution (i.e. a global minimizer) by only considering local quantities. Hence, one resorts to different notions of accuracy, e.g.

For smooth, unconstrained problems aim at stationary points, i.e. find $x$ such that $\|\nabla f(x)\|\le\epsilon$.

For smoothly constrained smooth problems aim at “approximate KKT-points”, i.e. points that satisfy the Karush-Kuhn-Tucker conditions approximately.

(There are adaptations to the nonsmooth case that are in the same spirit.) Hence, it would be more honest not to write “$\min_x f(x)$” in these cases since this is often not really the problem one is interested in. However, people write “solve $\min_x f(x)$” all the time even if they only want to find approximately stationary points.

2. The gradient method for smooth, unconstrained optimization

Consider a smooth function $f:\mathbb{R}^n\to\mathbb{R}$ (we’ll say more precisely how smooth in a minute). We make no assumption on convexity and hence, we are only interested in finding stationary points. From calculus in several dimensions it is known that $-\nabla f(x)$ is a direction of descent from the point $x$, i.e. there is a value $h>0$ such that $f(x - h\nabla f(x)) < f(x)$. Hence, it seems like moving into the direction of the negative gradient is a good idea. We arrive at what is known as gradient method:

$$x^{k+1} = x^k - h_k\nabla f(x^k).$$

Now let’s be more specific about the smoothness of $f$. Of course we need that $f$ is differentiable and we also want the gradient to be continuous (to make the evaluation of $\nabla f$ stable). It turns out that some more smoothness makes the gradient method more efficient, namely we require that the gradient of $f$ is Lipschitz continuous with a known Lipschitz constant $L$. The Lipschitz constant can be used to produce efficient stepsizes, namely, for $h_k = 1/L$ one has the estimate

$$f(x^{k+1}) \le f(x^k) - \tfrac{1}{2L}\|\nabla f(x^k)\|^2.$$

This inequality is really great because one can use telescoping to arrive at

$$\tfrac{1}{2L}\sum_{k=0}^{N}\|\nabla f(x^k)\|^2 \le f(x^0) - f(x^{N+1}) \le f(x^0) - f^*$$

with the optimal value $f^*$ (note that we do not need to know $f^*$ for the following). We immediately arrive at

$$\min_{0\le k\le N}\|\nabla f(x^k)\| \le \sqrt{\frac{2L\big(f(x^0) - f^*\big)}{N+1}}.$$

That’s already a result on the iteration complexity! Among the first $N+1$ iterates there is one which has a gradient norm of order $O(1/\sqrt{N})$.
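The bound is easy to check numerically; here is a sketch on a toy smooth function of my choosing (a positive definite quadratic, for which $L = \|Q\|_2$ and $f^* = 0$ are known exactly):

```python
import numpy as np

# Numerical check of the complexity bound above on f(x) = 0.5 * x^T Q x
# with Q symmetric positive definite, so L = ||Q||_2 and f* = 0.
rng = np.random.default_rng(4)
n, N = 10, 200
B = rng.standard_normal((n, n))
Q = B.T @ B + np.eye(n)            # symmetric positive definite
L = np.linalg.norm(Q, 2)           # Lipschitz constant of the gradient

f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x

x = rng.standard_normal(n)
f0 = f(x)
grad_norms = []
for _ in range(N + 1):
    g = grad(x)
    grad_norms.append(np.linalg.norm(g))
    x = x - g / L                  # gradient step with h_k = 1/L

best = min(grad_norms)
bound = np.sqrt(2 * L * f0 / (N + 1))   # the complexity bound (f* = 0)
print(best, bound)                      # best stays below bound
```

The matrix, starting point and $N$ are arbitrary; the point is only that the smallest observed gradient norm respects the $O(1/\sqrt{N})$ bound.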

However, from here on it’s getting complicated: We cannot say anything about the function values $f(x^k)$ or about convergence of the iterates $x^k$. And even for convex functions (which allow for more estimates from above and below) one needs some more effort to prove convergence of the function values to the global minimal one.

But how about convergence of the iterates of the gradient method if convexity is not given? It turns out that this is a hard problem. As illustration, consider the continuous case, i.e. a trajectory of the dynamical system

$$\dot x(t) = -\nabla f(x(t))$$

(which is a continuous limit of the gradient method as the stepsize goes to zero). A physical intuition about this dynamical system in $\mathbb{R}^2$ is as follows: The function $f$ describes a landscape and $x(t)$ are the coordinates of an object. Now, if the landscape is slippery the object slides down the landscape and if we omit friction and inertia, the object will always slide in the direction of the negative gradient. Consider now a favorable situation: $f$ is smooth, bounded from below and the level sets are compact. What can one say about the trajectories? Well, it seems clear that one will arrive at a local minimum after some time. But with a little imagination one can see that the trajectory of $x(t)$ does not even have to be of finite length! To see this consider a landscape that is a kind of bowl-shaped valley with a path which goes down the hillside in a spiral way such that it winds around the minimum infinitely often. This situation seems somewhat pathological and one usually does not expect situations like this in practice. If you have tried to prove convergence of the iterates of gradient or subgradient descent you may have noticed that one sometimes wonders why the proof turns out to be so complicated. The reason lies in the fact that such pathological functions are not excluded. But what functions should be excluded in order to avoid this pathological behavior without restricting to too simple functions?

The Kurdyka-Łojasiewicz inequality is a way to turn a complexity estimate for the gradient of a function into a complexity estimate for the function values. Hence, one would like to control the difference in function value by the gradient. One way to do so is the following:

Definition 1 Let $f:\mathbb{R}^n\to\mathbb{R}$ be a real valued function and assume (without loss of generality) that $f$ has a unique minimum at $0$ and that $f(0) = 0$. Then $f$ satisfies a Kurdyka-Łojasiewicz inequality if there exists a differentiable function $\kappa:[0,r]\to[0,\infty)$ on some interval $[0,r]$ with $\kappa(0) = 0$ and $\kappa' > 0$ such that

$$\|\nabla(\kappa\circ f)(x)\| \ge 1$$

for all $x$ such that $0 < f(x) < r$.

Informally, this definition ensures that one can “reparameterize the range of the function such that the resulting function has a kink in the minimum and is steep around that minimum”. This definition is due to the above paper by Kurdyka from 1998. In fact it is a slight generalization of the Łojasiewicz inequality (which dates back to a note of Łojasiewicz from 1963) which states that there is some $C>0$ and some exponent $\theta\in[0,1)$ such that in the above situation it holds that

$$\|\nabla f(x)\| \ge C\,f(x)^\theta.$$

To see that, take $\kappa(s) = \tfrac{1}{C(1-\theta)}\,s^{1-\theta}$ and evaluate the gradient of $\kappa\circ f$ to obtain $\|\nabla(\kappa\circ f)(x)\| = \tfrac{1}{C}\,f(x)^{-\theta}\|\nabla f(x)\| \ge 1$. This also makes clear that in the case the inequality is fulfilled, the gradient provides control over the function values.
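A concrete one-dimensional example (mine, not from the post): $f(x) = |x|^p$ with $p > 1$ satisfies a Łojasiewicz inequality.

```latex
% f(x) = |x|^p on the real line, p > 1, with minimum at 0 and f(0) = 0:
f'(x) = p\,|x|^{p-1}\operatorname{sign}(x), \qquad
f(x)^\theta = |x|^{p\theta}.
% Choosing \theta = (p-1)/p \in [0,1) gives |x|^{p\theta} = |x|^{p-1}, so
\|\nabla f(x)\| = p\,|x|^{p-1} \ge C\,f(x)^\theta
\quad\text{for any } 0 < C \le p.
% The flatter the minimum (large p), the closer \theta gets to 1.
```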

The works of Łojasiewicz and Kurdyka show that a large class of functions fulfill the respective inequalities, e.g. piecewise analytic functions and even a larger class (defined via so-called o-minimal structures) which I haven’t fully understood yet. Since the Kurdyka-Łojasiewicz inequality allows one to turn estimates on the gradient into estimates on the function values, it plays a key role in the analysis of descent methods. It somehow explains that one really never sees pathological behavior such as infinite minimization paths in practice. Lately there have been several works on further generalizations of the Kurdyka-Łojasiewicz inequality to the non-smooth case, see e.g. Characterizations of Lojasiewicz inequalities: subgradient flows, talweg, convexity by Bolte, Daniilidis, Ley and Mazet, or Convergence of non-smooth descent methods using the Kurdyka-Łojasiewicz inequality by Noll (however, I do not try to give an overview over the latest developments here). Especially, here at the French-German-Polish Conference on Optimization which takes place these days in Krakow, the Kurdyka-Łojasiewicz inequality has popped up several times.