Citation: Silvia Ferrari, Jagannathan Sarangapani, Frank L. Lewis. Editorial: Special issue on approximate dynamic programming and reinforcement learning [J]. Control Theory and Technology, 2011, 9(3): 309.
Received: May 18, 2011. Revised: May 18, 2011.
Editorial: Special issue on approximate dynamic programming and reinforcement learning
Silvia Ferrari, Jagannathan Sarangapani, Frank L. Lewis
(Laboratory for Intelligent Systems and Control (LISC), Department of Mechanical Engineering & Materials Science, Duke University; Department of Electrical & Computer Engineering, University of Missouri-Rolla; Automation and Robotics Research Institute, The University of Texas at Arlington)
Abstract:
We are extremely pleased to present this special issue of the Journal of Control Theory and Applications. Approximate dynamic programming (ADP) is a general and effective approach for solving optimal control and estimation problems by adapting to uncertain environments over time. ADP optimizes the sensing objectives accrued over a future time interval with respect to an adaptive control law, conditioned on prior knowledge of the system, its state, and uncertainties. A numerical search over the present value of the control minimizes a Hamilton-Jacobi-Bellman (HJB) equation, providing a basis for real-time, approximate optimal control.
Key words: special issue; approximate dynamic programming (ADP); reinforcement learning
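The recursion the abstract alludes to, minimizing a stage cost plus the discounted cost-to-go by a numerical search over the present control, can be sketched in a discrete-time, finite-state setting. This is a minimal illustrative sketch, not code from the editorial or its papers; the transition matrices, costs, and function names are hypothetical.

```python
# Illustrative sketch of the dynamic-programming backup behind ADP
# (hypothetical toy problem; not from the editorial). At each state the
# Bellman backup searches over the present control u, minimizing stage
# cost plus the discounted expected cost-to-go -- the discrete-time
# analogue of minimizing the HJB equation over the control.
import numpy as np

def value_iteration(P, cost, gamma=0.95, iters=500):
    """P[u] is an (S, S) transition matrix under control u;
    cost[s, u] is the stage cost. Returns cost-to-go V and greedy policy."""
    S, U = cost.shape
    V = np.zeros(S)
    for _ in range(iters):
        # Q[s, u] = stage cost + discounted expected cost-to-go
        Q = cost + gamma * np.stack([P[u] @ V for u in range(U)], axis=1)
        V = Q.min(axis=1)          # Bellman backup: minimize over controls
    return V, Q.argmin(axis=1)     # greedy (approximately optimal) policy

# Two-state, two-control toy problem (hypothetical numbers).
P = [np.array([[0.9, 0.1], [0.2, 0.8]]),   # dynamics under control 0
     np.array([[0.1, 0.9], [0.9, 0.1]])]   # dynamics under control 1
cost = np.array([[1.0, 2.0],
                 [0.5, 0.1]])
V, policy = value_iteration(P, cost)
```

In the full (function-approximation) ADP setting discussed in this issue, the tabular `V` above is replaced by a parameterized approximator trained online, which is what makes the approach tractable for continuous-state control problems.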