Received: July 6, 2011; Revised: February 29, 2012
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 61074058, 60874042), the Chinese Postdoctoral Science Foundation (No. 200902483), the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20090162120068), and the Central South University Innovation Project (No. 2011ssxt221).
|
Cooperative learning with joint state value approximation for multi-agent systems |
Xin CHEN, Gang CHEN, Weihua CAO, Min WU
(School of Information Science and Engineering, Central South University) |
Abstract: |
This paper addresses the 'curse of dimensionality', which makes reinforcement learning intractable when it is scaled to multi-agent systems: the joint state-action space grows exponentially with the number of agents, leading to large memory requirements and slow learning. For cooperative systems, which are common among multi-agent systems, this paper proposes a new multi-agent Q-learning algorithm that decomposes learning over the joint state and joint action space into two processes: learning individual actions and approximating the maximum value of the joint state. The latter process takes the other agents' actions into account to ensure that the joint action is optimal, and it supports the update of the former. Simulation results show that, compared with friend-Q learning and independent learning, the proposed algorithm learns the optimal joint behavior with less memory and a faster learning speed.
Key words: Multi-agent system; Q-learning; Cooperative system; Curse of dimensionality; Decomposition
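The decomposition described in the abstract can be pictured with a minimal tabular sketch. The table sizes, update rules, and the averaging used for the joint-state value below are illustrative assumptions, not the authors' exact algorithm; the point is only that each agent stores a table over its own actions while a shared estimate approximates the maximum joint-state value.

```python
import numpy as np

N_STATES, N_ACTIONS, N_AGENTS = 16, 4, 2
ALPHA, GAMMA = 0.1, 0.9

# One table per agent over (joint state, own action): memory grows as
# |S| * |A| * n instead of the |S| * |A|**n a joint-action learner needs.
Q = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(N_AGENTS)]
V = np.zeros(N_STATES)  # approximate maximum value of each joint state

def update(s, actions, r, s_next):
    """One cooperative learning step: the joint action `actions` taken in
    joint state `s` yields the shared reward `r` and next state `s_next`.
    Hypothetical update rules for illustration only."""
    # Each agent updates only its own action's value, bootstrapping from
    # the shared joint-state value estimate V instead of a joint Q-table.
    for i, a in enumerate(actions):
        Q[i][s, a] += ALPHA * (r + GAMMA * V[s_next] - Q[i][s, a])
    # V tracks the value of the greedy joint action assembled from the
    # agents' individually greedy choices; because all agents share the
    # reward, these choices jointly approximate the optimal joint action.
    greedy = [int(np.argmax(Q[i][s])) for i in range(N_AGENTS)]
    V[s] += ALPHA * (np.mean([Q[i][s, g] for i, g in enumerate(greedy)]) - V[s])
```

Under these assumptions the memory cost is O(|S|·|A|·n + |S|) rather than the O(|S|·|A|^n) needed for a full joint-action Q-table, which is the saving the abstract refers to.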