Gradient-free distributed online optimization in networks
Yuhang Liu1, Wenxiao Zhao2,3, Nan Zhang1, Dongdong Lv1, Shuai Zhang1

(1 Academy of Information and Communication Research, State Grid Information and Telecommunication Group Co., Ltd., Beijing, 102211, China; 2 The Key Laboratory of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, 100190, China; 3 School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, 100049, China)
|
Abstract:
In this paper, we consider the distributed online optimization problem over a time-varying network, where each agent has its own time-varying objective function and the goal is to minimize the overall accumulated loss. We focus on distributed algorithms that use neither gradient information nor projection operators, so as to improve applicability and computational efficiency. By introducing deterministic differences and randomized differences to replace the gradient information of the objective functions, and by removing the projection operator used in traditional algorithms, we design two kinds of gradient-free distributed online optimization algorithms without a projection step, which save considerable computational resources and impose fewer restrictions on applicability. We prove that both algorithms achieve consensus of the estimates and regrets of O(log(T)) for locally strongly convex objective functions. Finally, a simulation example is provided to verify the theoretical results.
Key words: Distributed optimization · Online convex optimization · Gradient-free algorithm · Projection-free algorithm
DOI: https://doi.org/10.1007/s11768-025-00242-0
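To make the gradient-free, projection-free idea concrete, the following is a minimal illustrative sketch and not the authors' algorithm from the paper: each agent mixes its estimate with its neighbors' estimates through a doubly stochastic weight matrix and then takes a descent step along a two-point randomized-difference estimate of its local gradient, with no projection applied. The network, local losses, step sizes, and difference parameters below are all assumed for illustration.

# A minimal sketch of a gradient-free, projection-free distributed online update
# (illustrative assumptions throughout; not the algorithm analyzed in the paper).
import numpy as np

rng = np.random.default_rng(0)

N, d, T = 4, 3, 200                    # agents, dimension, horizon (assumed values)
targets = rng.normal(size=(N, d))      # hypothetical data defining the local losses


def local_loss(i, t, x):
    """Hypothetical time-varying, strongly convex loss of agent i at time t."""
    drift = 0.1 * np.sin(0.05 * t)
    return 0.5 * np.sum((x - targets[i] - drift) ** 2)


def randomized_difference(i, t, x, c):
    """Two-point randomized-difference estimate of the gradient of the local loss."""
    u = rng.normal(size=x.shape)
    u /= np.linalg.norm(u)             # random direction on the unit sphere
    return (local_loss(i, t, x + c * u) - local_loss(i, t, x - c * u)) / (2 * c) * u * x.size


# Doubly stochastic mixing matrix for a fixed ring network; a time-varying
# graph would simply supply a different W at each step t.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i + 1) % N] = 0.25
    W[i, (i - 1) % N] = 0.25

x = np.zeros((N, d))                   # local estimates
for t in range(1, T + 1):
    alpha = 1.0 / t                    # diminishing step size (assumption)
    c = 1.0 / t ** 0.5                 # shrinking difference parameter (assumption)
    mixed = W @ x                      # consensus step: average with neighbors
    grads = np.array([randomized_difference(i, t, mixed[i], c) for i in range(N)])
    x = mixed - alpha * grads          # descent step, no projection applied

print("final estimates:\n", x)

A deterministic-difference variant would replace the random direction u by perturbations along each coordinate axis, at the cost of more function evaluations per step.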
|
|