Citation: Yuhang Liu, Yungang Liu, Ehsan Soleimani, Amirhossein Nikoofard. Gradient-free distributed online optimization in networks [J]. Control Theory and Technology, 2025, 23(1): 207–220.



Gradient-free distributed online optimization in networks
Yuhang Liu, Yungang Liu, Ehsan Soleimani, Amirhossein Nikoofard
(Academy of Information and Communication Research, State Grid Information and Telecommunication Group Co., Ltd., Beijing, 102211, China;School of Control Science and Engineering, Shandong University, Jinan, 250061, Shandong, China;Faculty of Electrical Engineering, Missouri University of Science and Technology, Rolla, 65409, MO, USA; Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran)
Abstract:
In this paper, we consider the distributed online optimization problem over a time-varying network, where each agent has its own time-varying objective function and the goal is to minimize the accumulated overall loss. We focus on distributed algorithms that use neither gradient information nor projection operators, so as to improve applicability and computational efficiency. By introducing deterministic differences and randomized differences as substitutes for the gradients of the objective functions, and by removing the projection operator used in traditional algorithms, we design two gradient-free distributed online optimization algorithms without a projection step, which save considerable computational resources and place fewer restrictions on applicability. We prove that both algorithms achieve consensus of the estimates and regrets of O(log T) for locally strongly convex objectives. Finally, a simulation example is provided to verify the theoretical results.
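As a rough illustration of the gradient-free idea described above, the sketch below shows a standard two-point randomized-difference gradient estimator, which replaces exact gradient information with two function evaluations along a random direction. This is a generic construction under our own assumptions (function name, smoothing radius `delta`), not the paper's exact algorithm.

```python
import numpy as np

def randomized_difference_gradient(f, x, delta=1e-3, rng=None):
    """Two-point randomized-difference estimate of the gradient of f at x.

    Uses only function evaluations (no gradient oracle): it samples a
    random unit direction u and returns d * (f(x + delta*u) - f(x - delta*u))
    / (2*delta) * u, whose expectation approximates the true gradient.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random direction on the unit sphere
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

# Example: estimate the gradient of f(x) = ||x||^2 at x = (1, 1);
# the true gradient is (2, 2). A single sample is noisy, but the
# estimator is unbiased, so averaging many samples recovers it.
f = lambda x: float(np.dot(x, x))
x = np.array([1.0, 1.0])
g = randomized_difference_gradient(f, x, rng=np.random.default_rng(0))
```

In a distributed online setting, each agent would plug such an estimate into its local update in place of the true gradient of its time-varying objective.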
Key words: Distributed optimization · Online convex optimization · Gradient-free algorithm · Projection-free algorithm
DOI:https://doi.org/10.1007/s11768-025-00242-0
Funding: This work was supported in part by the European Regional Development Fund under Grant KK.01.1.1.01.0009 (DATACROSS).