Distributed Optimization over Directed Graphs with Row Stochasticity and Constraint Regularity
This paper deals with an optimization problem over a network of agents, where the cost function is the sum of the agents' individual objectives and the constraint set is the intersection of local constraints. Most existing methods that employ subgradient and consensus steps to solve this problem require the weight matrix associated with the network to be column stochastic or even doubly stochastic, conditions that can be hard to arrange in directed networks. Moreover, known convergence analyses for distributed subgradient methods vary depending on whether the problem is unconstrained or constrained, and whether the local constraint sets are identical or nonidentical and compact. The main goals of this paper are: (i) removing the common column stochasticity requirement; (ii) relaxing the compactness assumption; and (iii) providing a unified convergence analysis. Specifically, assuming the communication graph to be fixed and strongly connected and the weight matrix to be only row stochastic, a distributed projected subgradient algorithm and a variation of it are presented to solve the problem for cost functions that are convex and Lipschitz continuous. Based on a regularity assumption on the local constraint sets, a unified convergence analysis is given that applies to both unconstrained and constrained problems, without assuming compactness of the constraint sets or the existence of an interior point in their intersection. Further, we establish an upper bound on the absolute objective error evaluated at each agent's available local estimate under a nonincreasing step size sequence; this bound allows us to analyze the convergence rate of both algorithms.
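To make the setting concrete, the following is a minimal sketch, not the paper's actual algorithms, of a generic consensus-plus-projected-subgradient iteration x_i(k+1) = P_{X_i}(sum_j a_ij x_j(k) - alpha_k g_i(k)) with a row-stochastic weight matrix on a toy problem; the graph, problem data, and step size schedule are all illustrative choices. Note that with only row stochasticity, this plain iteration is known to drift toward a minimizer of a Perron-weighted sum of the local objectives rather than the unweighted sum, which is the kind of imbalance a row-stochastic method has to account for.

```python
import numpy as np

# Toy instance: 4 agents on a strongly connected directed graph.
# Agent i privately holds f_i(x) = |x - b_i| and a box constraint [lo_i, hi_i];
# the network objective is sum_i f_i over the intersection of the boxes.
# All concrete values below are hypothetical, not taken from the paper.
n = 4
b = np.array([0.0, 1.0, 2.0, 3.0])
lo = np.array([0.5, 0.0, -1.0, 0.5])   # local boxes; intersection is [0.5, 2.0]
hi = np.array([2.0, 2.5, 2.0, 3.0])

# Directed edges j -> i (agent i listens to j): a ring plus one extra link,
# so in-degrees differ and the uniform weights below give a matrix that is
# row- but NOT column-stochastic -- the setting the abstract targets.
in_neighbors = {0: [3], 1: [0], 2: [0, 1], 3: [2]}
A = np.zeros((n, n))
for i, nbrs in in_neighbors.items():
    w = 1.0 / (len(nbrs) + 1)          # uniform over self + in-neighbors
    A[i, i] = w
    for j in nbrs:
        A[i, j] = w
assert np.allclose(A.sum(axis=1), 1)   # rows sum to 1
print("column sums:", A.sum(axis=0))   # generally != 1

x = np.zeros(n)                        # local estimates x_i
for k in range(1, 20001):
    alpha = 1.0 / np.sqrt(k)           # nonincreasing step size
    v = A @ x                          # consensus (mixing) step
    g = np.sign(v - b)                 # subgradient of |x - b_i| at v_i
    x = np.clip(v - alpha * g, lo, hi) # projection onto the local box X_i

# Estimates should cluster near a common point in the feasible intersection.
print("local estimates:", np.round(x, 3))
```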