Abstract

The purpose of this paper is to extend the convergence analysis of Han and Yuan (2012) for the
alternating direction method of multipliers (ADMM) from the strongly convex case to a more
general one. Under the assumption that each individual function is the composite of a strongly
convex function and a linear function, we prove that the classical ADMM for separable convex
programming with two blocks can be extended to the case with three or more blocks. The
problems, although still very special, arise naturally from some important applications,
for example, route-based traffic assignment problems.

1. Introduction

In this paper, we consider the convex programming problem with a separable objective:

$$\min\Big\{ \sum_{i=1}^{N} \theta_i(x_i) \;\Big|\; \sum_{i=1}^{N} A_i x_i = b,\; x_i \in \mathcal{X}_i,\; i = 1, \dots, N \Big\}, \tag{1}$$

where $\theta_i : \mathbb{R}^{n_i} \to \mathbb{R}$ ($i = 1, \dots, N$) are closed proper convex functions (not necessarily smooth); $\mathcal{X}_i \subseteq \mathbb{R}^{n_i}$ ($i = 1, \dots, N$) are closed convex sets; $A_i \in \mathbb{R}^{m \times n_i}$ ($i = 1, \dots, N$); and $b \in \mathbb{R}^m$. Throughout the paper, we assume that the solution set of (1) is nonempty.

For the special case of (1) with $N = 2$,

$$\min\{ \theta_1(x_1) + \theta_2(x_2) \mid A_1 x_1 + A_2 x_2 = b,\; x_1 \in \mathcal{X}_1,\; x_2 \in \mathcal{X}_2 \}, \tag{2}$$

the problem has been studied extensively. Among the many numerical methods, one of the most popular is the alternating direction method of multipliers (ADMM), presented originally in [1, 2]. The iterative scheme of ADMM for (2) is as follows:

$$\begin{aligned} x_1^{k+1} &= \arg\min_{x_1 \in \mathcal{X}_1} \Big\{ \theta_1(x_1) - (\lambda^k)^\top (A_1 x_1 + A_2 x_2^k - b) + \frac{\beta}{2}\,\|A_1 x_1 + A_2 x_2^k - b\|^2 \Big\}, \\ x_2^{k+1} &= \arg\min_{x_2 \in \mathcal{X}_2} \Big\{ \theta_2(x_2) - (\lambda^k)^\top (A_1 x_1^{k+1} + A_2 x_2 - b) + \frac{\beta}{2}\,\|A_1 x_1^{k+1} + A_2 x_2 - b\|^2 \Big\}, \\ \lambda^{k+1} &= \lambda^k - \beta\,(A_1 x_1^{k+1} + A_2 x_2^{k+1} - b), \end{aligned} \tag{3}$$

where $\lambda$ is the Lagrange multiplier associated with the linear constraint and $\beta > 0$ is the penalty parameter. The convergence of ADMM for (2) was established under the condition that the involved functions and the constraint sets are convex.

There are diverse applications whose objective function is separable into individual convex functions without coupled variables, such as traffic problems, the problem of recovering the low-rank and sparse components of matrices from incomplete and noisy observations in [3], the constrained total-variation image restoration and reconstruction problem in [4, 5], and the minimal surface PDE problem in [6]. It is thus natural to extend ADMM from $N = 2$ blocks to $N \ge 3$ blocks, resulting in the iterative scheme

$$\begin{aligned} x_i^{k+1} &= \arg\min_{x_i \in \mathcal{X}_i} \Big\{ \theta_i(x_i) - (\lambda^k)^\top \Big( \sum_{j<i} A_j x_j^{k+1} + A_i x_i + \sum_{j>i} A_j x_j^k - b \Big) \\ &\qquad\qquad + \frac{\beta}{2}\,\Big\| \sum_{j<i} A_j x_j^{k+1} + A_i x_i + \sum_{j>i} A_j x_j^k - b \Big\|^2 \Big\}, \quad i = 1, \dots, N, \\ \lambda^{k+1} &= \lambda^k - \beta \Big( \sum_{j=1}^{N} A_j x_j^{k+1} - b \Big). \end{aligned} \tag{4}$$

Unfortunately, the convergence of this natural extension is still open under the convexity assumption alone, and the recent convergence results in [7] require that all functions involved in the objective be strongly convex. This lack of convergence has inspired some ADMM-based methods, for example, prediction-correction type methods [3, 8–11]; that is, the iterate produced by (4) is regarded as a prediction, and the next iterate is a correction of it. However, numerical results show that the scheme (4) performs better than these variants. Recently, Han and Yuan [7] showed that the extension of ADMM to $N \ge 3$ blocks is globally convergent if the involved functions are further assumed to be strongly convex. This result does not settle the open problem regarding the convergence of the extension of ADMM under the convexity assumption alone, but it makes key progress towards this objective.

In this paper, we consider the separable convex optimization problem (1) in which each individual function is the composite of a strongly convex function $f_i$ with a linear transform $B_i$. That is, (1) takes the following form:

$$\min\Big\{ \sum_{i=1}^{N} f_i(B_i x_i) \;\Big|\; \sum_{i=1}^{N} A_i x_i = b,\; x_i \in \mathcal{X}_i,\; i = 1, \dots, N \Big\}, \tag{5}$$

where $f_i$ ($i = 1, \dots, N$) are closed proper strongly convex functions with moduli $\mu_i$ (not necessarily smooth); $\mathcal{X}_i \subseteq \mathbb{R}^{n_i}$ ($i = 1, \dots, N$) are closed convex sets; $A_i \in \mathbb{R}^{m \times n_i}$ and $b \in \mathbb{R}^m$; $B_i \in \mathbb{R}^{l_i \times n_i}$ ($i = 1, \dots, N$), where $B_i$ may not have full column rank (if every $B_i$ has full column rank, the composite function $f_i(B_i x_i)$ is strongly convex in $x_i$ and the problem reduces to the case considered in [7]). Note that although (5) is very special, it arises frequently in applications. One example is the route-based traffic assignment problem [12], where $f_i$ is the link traffic cost, $B_i$ is the link-path incidence matrix, and $x_i$ is the path flow vector.

In the following, we slightly abuse notation and still write $\theta_i(x_i) = f_i(B_i x_i)$; that is, the problem under consideration is

$$\min\Big\{ \sum_{i=1}^{N} \theta_i(x_i) \;\Big|\; \sum_{i=1}^{N} A_i x_i = b,\; x_i \in \mathcal{X}_i,\; i = 1, \dots, N \Big\},$$

where $\theta_i(x_i) = f_i(B_i x_i)$ and $f_i$ ($i = 1, \dots, N$) are closed proper strongly convex functions with moduli $\mu_i$ (not necessarily smooth).

The rest of the paper is organized as follows. In the next section, we list some necessary preliminary results that will be used in the rest of the paper. We then describe the algorithm formally and analyze its global convergence under reasonable conditions in Section 3. We complete the paper with some conclusions in Section 4.

2. Preliminaries

In this section, we summarize some basic concepts and their properties that will be useful for further discussion.

Let $\|\cdot\|_p$ denote the standard $p$-norm, and in particular, let $\|\cdot\|$ denote the Euclidean norm. For a symmetric and positive definite matrix $G$, we denote by $\|x\|_G$ the $G$-norm, that is, $\|x\|_G = \sqrt{x^\top G x}$. If $G$ is the product of a positive parameter $\beta$ and the identity matrix $I$, that is, $G = \beta I$, we use the simpler notation $\|x\|_\beta = \sqrt{\beta}\,\|x\|$.
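To make the norm notation concrete, here is a small numerical sketch (not part of the original paper); the matrix and vector values below are hypothetical.

```python
import numpy as np

def g_norm(x, G):
    # The G-norm: ||x||_G = sqrt(x^T G x), for symmetric positive definite G.
    return float(np.sqrt(x @ G @ x))

# Hypothetical example: with G = beta * I, the G-norm reduces to sqrt(beta) * ||x||.
beta = 4.0
x = np.array([3.0, 4.0])
G = beta * np.eye(2)
print(g_norm(x, G))  # sqrt(beta) * ||x|| = 2 * 5 = 10.0
```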

Let $\theta : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\}$. If the domain of $\theta$, denoted by $\operatorname{dom}\theta = \{x \mid \theta(x) < +\infty\}$, is not empty, then $\theta$ is said to be proper. If for any $x, y \in \operatorname{dom}\theta$ and $\alpha \in [0, 1]$, we have

$$\theta(\alpha x + (1-\alpha)y) \le \alpha\,\theta(x) + (1-\alpha)\,\theta(y),$$

then $\theta$ is said to be convex. Furthermore, $\theta$ is said to be strongly convex with modulus $\mu > 0$ if and only if

$$\theta(\alpha x + (1-\alpha)y) \le \alpha\,\theta(x) + (1-\alpha)\,\theta(y) - \frac{\mu}{2}\,\alpha(1-\alpha)\,\|x - y\|^2.$$

A set-valued operator $F$ defined on $\mathbb{R}^n$ is said to be monotone if and only if

$$(u - v)^\top (x - y) \ge 0, \quad \forall\, u \in F(x),\; v \in F(y),$$

and $F$ is said to be strongly monotone with modulus $\mu > 0$ if and only if

$$(u - v)^\top (x - y) \ge \mu\,\|x - y\|^2, \quad \forall\, u \in F(x),\; v \in F(y).$$

Let $\Gamma_0(\mathbb{R}^n)$ denote the set of closed proper convex functions from $\mathbb{R}^n$ to $\mathbb{R} \cup \{+\infty\}$. For any $\theta \in \Gamma_0(\mathbb{R}^n)$, the subdifferential of $\theta$, which is the set-valued operator defined by

$$\partial\theta(x) = \{ d \in \mathbb{R}^n \mid \theta(y) \ge \theta(x) + d^\top (y - x),\; \forall y \in \mathbb{R}^n \},$$

is monotone. Moreover, if $\theta$ is a strongly convex function with modulus $\mu$, then $\partial\theta$ is strongly monotone with modulus $\mu$.
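As a quick numerical illustration (not part of the original text), the strong monotonicity of the subdifferential can be checked for the hypothetical choice $\theta(x) = (\mu/2)\|x\|^2 + \|x\|_1$, which is strongly convex with modulus $\mu$ but not smooth:

```python
import numpy as np

mu = 0.5

def theta(x):
    # theta(x) = (mu/2)||x||^2 + ||x||_1 is strongly convex with modulus mu.
    return 0.5 * mu * float(x @ x) + float(np.sum(np.abs(x)))

def subgrad(x):
    # One element of the subdifferential: mu*x + sign(x)
    # (valid because sign(0) = 0 lies in the interval [-1, 1]).
    return mu * x + np.sign(x)

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    # Strong monotonicity: (u - v)^T (x - y) >= mu * ||x - y||^2.
    gap = (subgrad(x) - subgrad(y)) @ (x - y) - mu * float((x - y) @ (x - y))
    assert gap >= -1e-12
print("strong monotonicity holds on all sampled pairs")
```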

Let $F$ be a mapping from a set $\Omega \subseteq \mathbb{R}^n$ into $\mathbb{R}^n$. Then $F$ is said to be co-coercive on $\Omega$ with modulus $\mu > 0$ if

$$(F(x) - F(y))^\top (x - y) \ge \mu\,\|F(x) - F(y)\|^2, \quad \forall\, x, y \in \Omega.$$

Remark 2. Assumption 1 is a little restrictive; however, some problems do satisfy it. A remarkable one is the following route-based traffic assignment problem.

Consider a transportation network $(\mathcal{N}, \mathcal{L})$, where $\mathcal{N}$ is the set of nodes. We denote the set of links by $\mathcal{L}$ and the number of its elements by $m$. Let $\mathcal{W}$ denote the set of origin-destination (O-D) pairs. For an O-D pair $w \in \mathcal{W}$, let $d_w$ be its traffic demand; let $\mathcal{R}_w$ be the set of routes connecting $w$, with $n_w$ denoting the number of routes connecting $w$; and let $f_r$ be the route flow on route $r$. The feasible route flow vector is thus given by

$$\mathcal{F} = \Big\{ f \ge 0 \;\Big|\; \sum_{r \in \mathcal{R}_w} f_r = d_w,\; \forall\, w \in \mathcal{W} \Big\}.$$

Define $\Delta$ as the link-route incidence matrix such that

$$\Delta_{ar} = \begin{cases} 1, & \text{if route } r \text{ traverses link } a, \\ 0, & \text{otherwise}. \end{cases}$$

Then the link flow $v$ can be written as

$$v = \Delta f.$$

Denoting the link cost function by $t(\cdot)$ and, for the additive case, the route cost function by $\Phi(\cdot)$, the two are related by

$$\Phi(f) = \Delta^\top t(\Delta f).$$

The user equilibrium traffic assignment problem can be formulated as a VI: find $f^* \in \mathcal{F}$ such that

$$(f - f^*)^\top \Phi(f^*) \ge 0, \quad \forall\, f \in \mathcal{F},$$

or equivalently, find $f^* \in \mathcal{F}$ such that

$$(f - f^*)^\top \Delta^\top t(\Delta f^*) \ge 0, \quad \forall\, f \in \mathcal{F},$$

where $t(\cdot)$ is the vector of link cost functions. In general, the number of routes far exceeds the number of links, so $\Delta$ does not have full column rank (if it does, the above variational inequality is strongly monotone whenever $t$ is).

For simplicity, in the following we only consider the case $N = 3$. Notice that for $N > 3$, the convergence can be proved similarly, following the treatment of the case $N = 3$.
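The relations between route flows, link flows, and route costs in Remark 2 can be illustrated on a toy network (a hypothetical two-link, three-route example with an assumed affine link cost, not data from [12]):

```python
import numpy as np

# Hypothetical toy network: 2 links, 3 routes.
# Route 1 uses link 1, route 2 uses link 2, route 3 uses both links.
Delta = np.array([[1.0, 0.0, 1.0],
                  [0.0, 1.0, 1.0]])   # link-route incidence matrix

f = np.array([2.0, 1.0, 3.0])         # route flows
v = Delta @ f                         # link flows: v = Delta f, equals [5, 4]

def t(v):
    # Assumed affine link cost t(v) = t0 + v (strongly monotone in v).
    return np.array([1.0, 2.0]) + v

Phi = Delta.T @ t(v)                  # route costs: Phi(f) = Delta^T t(Delta f), equals [6, 6, 12]
print(v, Phi)
print(np.linalg.matrix_rank(Delta))   # 2 < 3 columns: Delta lacks full column rank
```

Note that the incidence matrix has more columns (routes) than rows (links), so it cannot have full column rank, which is exactly the situation the paper's composite assumption is designed to cover.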

3. The Method

In this section, we consider the following convex minimization problem with linear constraints, whose objective is the sum of three individual functions without coupled variables:

$$\min\{ \theta_1(x_1) + \theta_2(x_2) + \theta_3(x_3) \mid A_1 x_1 + A_2 x_2 + A_3 x_3 = b,\; x_i \in \mathcal{X}_i,\; i = 1, 2, 3 \}, \tag{19}$$

where $\theta_i(x_i) = f_i(B_i x_i)$ and $f_i$ ($i = 1, 2, 3$) are closed proper strongly convex functions with moduli $\mu_i$ (not necessarily smooth); $A_i \in \mathbb{R}^{m \times n_i}$, $B_i \in \mathbb{R}^{l_i \times n_i}$ ($i = 1, 2, 3$); $\mathcal{X}_i \subseteq \mathbb{R}^{n_i}$ ($i = 1, 2, 3$) are closed convex sets; and $b \in \mathbb{R}^m$.

The iterative scheme of the extended ADMM for problem (19) is as follows:

$$\begin{aligned} x_1^{k+1} &= \arg\min_{x_1 \in \mathcal{X}_1} \Big\{ \theta_1(x_1) - (\lambda^k)^\top (A_1 x_1 + A_2 x_2^k + A_3 x_3^k - b) + \frac{\beta}{2}\,\|A_1 x_1 + A_2 x_2^k + A_3 x_3^k - b\|^2 \Big\}, \\ x_2^{k+1} &= \arg\min_{x_2 \in \mathcal{X}_2} \Big\{ \theta_2(x_2) - (\lambda^k)^\top (A_1 x_1^{k+1} + A_2 x_2 + A_3 x_3^k - b) + \frac{\beta}{2}\,\|A_1 x_1^{k+1} + A_2 x_2 + A_3 x_3^k - b\|^2 \Big\}, \\ x_3^{k+1} &= \arg\min_{x_3 \in \mathcal{X}_3} \Big\{ \theta_3(x_3) - (\lambda^k)^\top (A_1 x_1^{k+1} + A_2 x_2^{k+1} + A_3 x_3 - b) + \frac{\beta}{2}\,\|A_1 x_1^{k+1} + A_2 x_2^{k+1} + A_3 x_3 - b\|^2 \Big\}, \\ \lambda^{k+1} &= \lambda^k - \beta\,(A_1 x_1^{k+1} + A_2 x_2^{k+1} + A_3 x_3^{k+1} - b), \end{aligned} \tag{20}$$

where $\lambda$ is the Lagrange multiplier associated with the linear constraint and $\beta > 0$ is the penalty parameter.
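To illustrate the three-block scheme concretely, here is a minimal runnable sketch (not the paper's implementation) on a hypothetical toy instance with $A_i = I$, $\mathcal{X}_i = \mathbb{R}^n$, and $\theta_i(x_i) = \frac{1}{2}\|x_i - c_i\|^2$, which is strongly convex with modulus 1, so each subproblem has the closed-form minimizer used below:

```python
import numpy as np

def admm3(c, b, beta=0.5, iters=200):
    """Three-block ADMM for: min sum_i 0.5*||x_i - c_i||^2  s.t.  x_1 + x_2 + x_3 = b."""
    x = [np.zeros_like(b) for _ in range(3)]
    lam = np.zeros_like(b)
    for _ in range(iters):
        for i in range(3):  # Gauss-Seidel sweep over the three blocks
            others = sum(x[j] for j in range(3) if j != i)
            # Subproblem: min 0.5||x_i - c_i||^2 - lam^T(x_i + others - b)
            #                 + (beta/2)||x_i + others - b||^2;
            # setting its gradient to zero gives the update below.
            x[i] = (c[i] + lam - beta * (others - b)) / (1.0 + beta)
        lam = lam - beta * (x[0] + x[1] + x[2] - b)  # multiplier update
    return x, lam

b = np.array([6.0, 3.0])
c = [np.array([1.0, 1.0]), np.array([2.0, -1.0]), np.array([0.0, 0.0])]
x, lam = admm3(c, b)
shift = (b - sum(c)) / 3.0  # analytic solution: x_i* = c_i + shift
print(max(float(np.max(np.abs(x[i] - (c[i] + shift)))) for i in range(3)))  # ~0
```

On this strongly convex instance the iterates reach the analytic solution; for general merely convex problems, as noted in the introduction, the convergence of the direct extension remains open.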

4. Convergence

In this section, we prove the convergence of the extended ADMM for problem (19). Under the foregoing assumptions, by invoking the first-order necessary and sufficient optimality conditions for convex programming, we easily see that problem (19) is characterized by the following variational inequality (VI): find $w^* = (x_1^*, x_2^*, x_3^*, \lambda^*) \in \mathcal{W} := \mathcal{X}_1 \times \mathcal{X}_2 \times \mathcal{X}_3 \times \mathbb{R}^m$ such that

$$\theta(x) - \theta(x^*) + (w - w^*)^\top Q(w^*) \ge 0, \quad \forall\, w \in \mathcal{W}, \tag{21}$$

where

$$\theta(x) = \sum_{i=1}^{3} \theta_i(x_i), \qquad w = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \lambda \end{pmatrix}, \qquad Q(w) = \begin{pmatrix} -A_1^\top \lambda \\ -A_2^\top \lambda \\ -A_3^\top \lambda \\ A_1 x_1 + A_2 x_2 + A_3 x_3 - b \end{pmatrix}. \tag{22}$$

We denote the VI (21)-(22) by MVI.

Similarly to [7], we propose an easily implementable stopping criterion for executing (20):
and its rationale can be seen in the following lemma.

Lemma 3 implies that the iterate is a solution of MVI when the inequality (23) holds with $\epsilon = 0$. Some techniques for establishing error bounds in [13] can help us analyze how precisely the iterate satisfies the optimality conditions when the proposed stopping criterion is satisfied with a tolerance $\epsilon > 0$.

Lemma 4. Let $(x_1^*, x_2^*, x_3^*)$ be a solution of problem (19), and let $\lambda^*$ be a corresponding Lagrange multiplier associated with the linear constraint. Then the sequence generated by (20) satisfies

Proof. By invoking the first-order optimality condition for each subproblem in (20), for any $x_i \in \mathcal{X}_i$, $i = 1, 2, 3$, we get
Setting $x_i = x_i^*$ ($i = 1, 2, 3$) in (25), we have
On the other hand, setting in (21), it follows that
Adding (26) and (27), we obtain
Rearranging the above inequalities, we derive that
Adding the inequalities in (29), we have
The proof is complete.

Hereafter, we define a matrix which will make the notation of the proofs more succinct. More specifically, let
Obviously, it is a positive semidefinite matrix. Purely for convenience of analysis, we denote

Lemma 5. Let be a solution of MVI, and let the sequence be generated by (20). Then, one has

Proof. From (20) and Lemma 4, we have
Since
and , we can get
Using the Cauchy-Schwarz inequality, we have
Substituting (36) and (37) into (34), we get
Since is strongly convex, by the strong monotonicity of the subdifferential mapping (with the modulus ), we have
where , , for any . By using the notation of , from (38) we have
The proof is complete.

Theorem 6. Under Assumption 1, for any
the sequence generated by (20) converges to a solution of MVI.

Proof. From Lemma 5, we have
where
From Assumption 1, it follows that
Consequently,
From (45), we have
which means that the generated sequence is bounded. Furthermore, it follows that
which means that
Therefore, we have
Since is nonzero and bounded, from (48) we have
Since is bounded, has at least one cluster point, say . Let be the corresponding subsequence that converges to . Taking a limit along this subsequence in (25) and (49), we obtain ,
from which it follows that is an optimal Lagrange multiplier. Since is arbitrary, we can set in (46) and conclude that the whole generated sequence converges to a solution of MVI.

5. Conclusions

In this paper, we extended the convergence analysis of the ADMM for the separable convex optimization problem with strongly convex functions to the case in which the individual functions are composites of strongly convex functions with linear transforms. Under suitable further assumptions, we established the global convergence of the algorithm.

It should be admitted that although some problems arising from applications, such as traffic assignment, fall within our analysis, the problems considered here are still quite special. Thus, the open problem of the convergence of the ADMM with three or more blocks under mere convexity remains far from solved.

Acknowledgments

Xingju Cai was supported by the National Natural Science Foundation of China (NSFC) Grant nos. 11071122 and 11171159 and by the Doctoral Fund of Ministry of Education of China Grant no. 20103207110002.