Citation: Yuchen Yang, Kaihong Lu, Long Wang. Online distributed optimization with stochastic gradients: high probability bound of regrets [J]. Control Theory and Technology, 2024, 22(3): 419–430.

Online distributed optimization with stochastic gradients: high probability bound of regrets
Yuchen Yang1, Kaihong Lu2, Long Wang1
(1 Center for Systems and Control, College of Engineering, Peking University, Beijing 100871, China;2 College of Electrical Engineering and Automation, Shandong University of Science and Technology, Qingdao 266590, Shandong, China)
Abstract:
In this paper, the problem of online distributed optimization over a convex constraint set is studied via a network of agents. Each agent has access only to a noisy gradient of its own objective function and can communicate with its neighbors through the network. To handle this problem, an online distributed stochastic mirror descent algorithm is proposed. Existing works on online distributed algorithms involving stochastic gradients provide only expectation bounds on the regrets. In contrast, we study high probability bounds on the regrets, i.e., sublinear regret bounds characterized by the natural logarithm of the inverse of the failure probability. Under mild assumptions on the graph connectivity, we prove that the dynamic regret grows sublinearly with high probability if the deviation of the minimizer sequence is sublinear in the square root of the time horizon. Finally, a simulation is provided to demonstrate the effectiveness of our theoretical results.
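The scheme described in the abstract can be illustrated with a minimal sketch. The sketch below assumes a Euclidean mirror map (under which mirror descent reduces to projected stochastic gradient descent), a doubly stochastic mixing matrix `W`, and a caller-supplied projection onto the convex set; all names (`online_dsmd`, `grads`, `proj`) are illustrative, not the paper's notation, and this is not the authors' exact algorithm.

```python
import numpy as np

def online_dsmd(W, grads, proj, x0, T, eta=0.1):
    """Online distributed stochastic mirror descent, Euclidean mirror map.

    W      : (n, n) doubly stochastic mixing matrix over the agent network
    grads  : grads(t, i, x) returns a noisy gradient of agent i's
             time-t objective at point x
    proj   : projection onto the convex constraint set
    x0     : common initial point, shape (d,)
    """
    n, d = len(W), len(x0)
    X = np.tile(x0, (n, 1))  # row i holds agent i's current iterate
    history = []
    for t in range(T):
        # each agent queries its own noisy local gradient
        G = np.stack([grads(t, i, X[i]) for i in range(n)])
        # consensus step: mix iterates with neighbors via W
        X = W @ X
        # local stochastic gradient step followed by projection
        X = np.apply_along_axis(proj, 1, X - eta * G)
        history.append(X.copy())
    return X, history
```

For instance, with quadratic local objectives f_{i,t}(x) = ||x − c_i||² (so the noisy gradient is 2(x − c_i) plus noise) and projection onto the unit ball, the agents' iterates stay feasible and cluster around the average of the c_i up to the step-size-scaled noise.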
Key words: Distributed optimization · Online optimization · Stochastic gradient · High probability
DOI: https://doi.org/10.1007/s11768-023-00186-3