Mohammadreza Chamanbaz — Research — Distributed Optimization

Distributed Convex Optimization in the Presence of Uncertainty

Numerous problems in statistics, machine learning and decision making can be cast as (uncertain) convex optimization problems. For large-scale problems, it is important to develop algorithms capable of solving the problem in a distributed fashion. In this line of research, we consider a set of processors with limited computation and communication capabilities, connected through a network. Each processor has knowledge of one uncertain convex constraint, together with a cost function common to all processors. The goal of the network is to agree on a decision vector that minimizes (or maximizes) the common cost while satisfying all agents' constraints, using only local computation and data exchange with neighbouring processors. We make use of randomization to handle uncertainty that may enter the constraints in an arbitrarily complex fashion.
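The setup above can be illustrated with a minimal sketch (not the exact algorithm studied in this research): each node repeatedly averages its estimate with its neighbours' estimates, takes a gradient step on the common cost, and handles its own uncertain constraint by projecting onto a set of randomly sampled scenarios of it. All problem data below (cost, constraint samples, ring network, step sizes) are made-up placeholders chosen for illustration.

```python
# Sketch of distributed consensus + projected gradient with sampled
# (scenario) constraints. Illustrative only; all data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, n_scenarios = 4, 2, 50

# Common cost known to all nodes: f(x) = ||x - t||^2.
t = np.array([2.0, 2.0])

# Each node i holds an uncertain halfspace constraint a(delta)^T x <= 1;
# randomization: node i draws n_scenarios samples of a(delta).
A = [rng.normal([1.0, 0.5], 0.1, size=(n_scenarios, dim))
     for _ in range(n_nodes)]

# Ring communication graph with doubly stochastic mixing weights.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i + 1) % n_nodes] = 0.25
    W[i, (i - 1) % n_nodes] = 0.25

def project(x, A_i, n_passes=20):
    """Cyclic projection of x onto the sampled halfspaces a^T x <= 1."""
    for _ in range(n_passes):
        for a in A_i:
            v = a @ x - 1.0
            if v > 0.0:
                x = x - v * a / (a @ a)
    return x

X = rng.normal(size=(n_nodes, dim))  # each row: one node's estimate
for k in range(200):
    X = W @ X                        # exchange data with neighbours
    step = 0.5 / (k + 1)             # diminishing step size
    for i in range(n_nodes):
        grad = 2.0 * (X[i] - t)      # gradient of the common cost
        X[i] = project(X[i] - step * grad, A[i])

# Disagreement across the network shrinks as the nodes reach consensus.
spread = np.max(np.linalg.norm(X - X.mean(axis=0), axis=1))
print(spread)
```

Because the nodes sample the uncertain constraint independently, their candidate solutions differ slightly, but the consensus step keeps the disagreement small while each node remains (approximately) feasible for its own sampled constraints.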

A network of processors. Red stars indicate the processors and blue lines the communication links between them.

The objective in a distributed optimization setup is that each computation node, by performing local computation and communicating with its neighbours, converges together with all other nodes to a common point as the solution. The upper plot shows the objective value at each node in the network, and the lower plot shows the distance from each node's candidate solution to the final solution; all nodes converge to a common solution.