Received: October 10, 2015; Revised: October 27, 2015
Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme
P. Yi, Y. Hong
(Key Lab of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences)
Abstract:
In this paper, we consider a distributed convex optimization problem for a multi-agent system in which the global objective function is the sum of the agents' individual objective functions. To solve this problem, we propose a distributed stochastic sub-gradient algorithm with a random sleep scheme. In the random sleep scheme, each agent independently and randomly decides, at each iteration, whether to query the sub-gradient of its local objective function. The algorithm not only generalizes distributed algorithms with variable working nodes and multi-step consensus-based algorithms, but also extends some existing randomized convex set intersection results. We investigate the convergence properties of the algorithm under two types of stepsizes: a randomized diminishing stepsize, which is heterogeneous and computed by each agent individually, and a fixed stepsize, which is homogeneous across agents. Under the randomized stepsize, we prove that the agents' estimates reach consensus almost surely and in mean, and that the consensus point is an optimal solution with probability 1. Moreover, we derive an error bound for the algorithm under the fixed homogeneous stepsize and show how the error depends on the stepsize and the update rates.
Key words: Distributed optimization, sub-gradient algorithm, random sleep, multi-agent systems, randomized algorithm
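
To make the random sleep scheme concrete, the per-agent update can be written schematically as x_i(k+1) = sum_j w_ij x_j(k) - alpha_i(k) chi_i(k) g_i(k), where chi_i(k) is a Bernoulli wake-up indicator and g_i(k) is a sub-gradient of agent i's local objective. Below is a minimal simulation sketch of this form, assuming a ring-graph doubly stochastic mixing matrix W, toy local objectives f_i(x) = |x - c_i|, and a per-agent diminishing stepsize 1/k_i driven by agent i's own update count; all of these modeling choices are illustrative and are not taken from the paper itself.

import numpy as np

# Toy setup: n agents, local objectives f_i(x) = |x - c_i| (illustrative);
# the minimizer of the sum is the median of c, here 3.0.
rng = np.random.default_rng(0)
n, iters = 5, 20000
c = np.array([1.0, 2.0, 3.0, 4.0, 10.0])

# Doubly stochastic mixing matrix on a ring graph (assumed topology).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = rng.standard_normal(n)    # agents' initial estimates
p = 0.5                       # wake-up probability (update rate)
counts = np.zeros(n)          # each agent's own update counter

for k in range(iters):
    x = W @ x                                # consensus (mixing) step
    awake = rng.random(n) < p                # random sleep: Bernoulli wake-ups
    counts += awake
    g = np.sign(x - c)                       # a sub-gradient of |x_i - c_i|
                                             # (computed for all agents here only
                                             # for vectorized simplicity)
    alpha = awake / np.maximum(counts, 1.0)  # heterogeneous stepsize 1/k_i,
                                             # zero for sleeping agents
    x = x - alpha * g                        # only awake agents take a sub-gradient step

print(np.round(x, 3))         # all entries should be close to 3.0

Replacing alpha with a common constant reproduces the fixed homogeneous stepsize setting, for which the abstract states an error bound, rather than exact convergence, depending on the stepsize and the update rate p.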