Received: July 29, 2010; Revised: April 11, 2011
Funding: This work was partly supported by the National Science Foundation (Grant Nos. ECCS-0621924 and ECCS-0901562) and the Intelligent Systems Center.
|
Online optimal control of nonlinear discrete-time systems using approximate dynamic programming |
Travis DIERKS, Sarangapani JAGANNATHAN
(DRS Sustainment Systems, Inc.; Department of Electrical and Computer Engineering, Missouri University of Science and Technology)
Abstract: |
In this paper, the optimal control of a class of general affine nonlinear discrete-time (DT) systems is undertaken by solving the Hamilton-Jacobi-Bellman (HJB) equation online and forward in time. The proposed approach, normally referred to as adaptive or approximate dynamic programming (ADP), uses online approximators (OLAs) to solve the infinite-horizon optimal regulation and tracking control problems for affine nonlinear DT systems in the presence of unknown internal dynamics. Both the regulation and tracking controllers are designed using OLAs to obtain the optimal feedback control signal and its associated cost function. Additionally, the tracking controller design entails a feedforward portion that is derived and approximated using an additional OLA for steady-state conditions. Novel update laws for tuning the unknown parameters of the OLAs online are derived. Lyapunov techniques are used to show that all signals are uniformly ultimately bounded and that the approximated control signals approach the optimal control inputs with a small bounded error. In the absence of OLA reconstruction errors, optimal control is demonstrated. Simulation results verify that all OLA parameter estimates remain bounded and that the proposed OLA-based optimal control scheme tunes itself to reduce the cost given by the HJB equation.
Key words: online nonlinear optimal control; Hamilton-Jacobi-Bellman; online approximators; discrete-time systems
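
To illustrate the general idea behind OLA-based ADP for a DT affine system, the sketch below approximates the cost function and the control with linear-in-parameter structures and tunes both by gradient descent on the Bellman (temporal-difference) error. The dynamics, basis functions, gains, and update rules here are illustrative assumptions for this sketch only; the paper's actual update laws are derived through Lyapunov analysis and differ in detail.

    import numpy as np

    # Illustrative sketch of ADP with online approximators (OLAs) for a
    # discrete-time affine system x_{k+1} = f(x_k) + g(x_k) u_k.
    # NOTE: the dynamics, basis functions, gains, and gradient-descent tuning
    # below are assumptions for this sketch, not the paper's update laws.

    def f(x):                      # example internal dynamics (assumed)
        return np.array([0.8 * x[0] + 0.1 * x[1], -0.2 * x[0] + 0.9 * x[1]])

    def g(x):                      # example input gain matrix (assumed)
        return np.array([[0.0], [1.0]])

    def phi(x):                    # critic basis: quadratic terms of the state
        return np.array([x[0]**2, x[0] * x[1], x[1]**2])

    def sigma(x):                  # actor basis: the state itself
        return x.copy()

    Q = np.eye(2); R = np.eye(1)               # stage-cost weights
    Wc = np.zeros(3); Wa = np.zeros((1, 2))    # OLA parameter estimates
    alpha_c, alpha_a = 0.05, 0.05              # tuning gains (assumed)

    x = np.array([1.0, -0.5])
    for k in range(200):
        u = Wa @ sigma(x)                       # approximate control input
        x_next = f(x) + (g(x) @ u).ravel()
        r = x @ Q @ x + u @ R @ u               # stage cost r(x_k, u_k)
        # Bellman (temporal-difference) error of the approximated cost
        e_c = Wc @ phi(x) - (r + Wc @ phi(x_next))
        Wc -= alpha_c * e_c * phi(x)            # critic update (gradient step)
        # Actor target from the critic: u ~ -(1/2) R^{-1} g(x)^T dV/dx.
        # The DT optimal control uses the gradient at the next state; this
        # sketch evaluates it at the current state for simplicity.
        dV = np.array([2 * Wc[0] * x[0] + Wc[1] * x[1],
                       Wc[1] * x[0] + 2 * Wc[2] * x[1]])
        u_target = -0.5 * np.linalg.solve(R, g(x).T @ dV)
        e_a = (Wa @ sigma(x)) - u_target
        Wa -= alpha_a * np.outer(e_a, sigma(x)) # actor update (gradient step)
        x = x_next

Both parameter vectors are tuned forward in time from measured states alone, which mirrors the online, model-partially-unknown setting of the paper, although the specific tuning laws and stability guarantees there are established via Lyapunov analysis rather than plain gradient descent.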