Citation: WU Rui-zhuo, ZHANG Xing-long, XU Xin, ZHANG Chang-xin. Learning predictive tracking control method with Gaussian process modeling for mobile robots[J]. Control Theory & Applications, 2023, 40(12): 2236-2246.
|
Learning predictive tracking control method with Gaussian process modeling for mobile robots
Received: 2023-04-20    Revised: 2023-12-11
DOI: 10.7641/CTA.2023.30250
2023,40(12):2236-2246 |
Keywords: Gaussian process; learning predictive control; receding horizon reinforcement learning; environment and model uncertainty; unmanned system control technology
Funding: Supported by the National Natural Science Foundation of China (61825305, 62003361, U21A20518).
|
Abstract (Chinese)
Mobile robots face the challenges of environmental and model uncertainty on complex terrain; surfaces such as grassland and steep slopes degrade high-precision control. This paper proposes a learning predictive control method for mobile robots based on Gaussian process modeling, which models and predicts environmental and model uncertainty in real time and uses the resulting model in learning the optimal control policy, achieving robot motion control under model and environmental uncertainty. The method employs Gaussian process regression to model the environmental and model uncertainty, combines it with the system kinematic equations to obtain an error-state model, and applies this model in receding horizon reinforcement learning, where the optimal control policy is learned through iterative optimization. Finally, simulation experiments on the lateral tracking control of a mobile robot along elliptical and figure-eight trajectories were carried out and compared with nonlinear model predictive control. The results show that the proposed method effectively improves controller performance on complex terrain, with a 20% improvement in the performance index over receding horizon reinforcement learning without Gaussian process modeling and a 36% improvement over nonlinear model predictive control, verifying the effectiveness and superiority of the proposed method.
Abstract (English)
Due to environmental and model uncertainty, mobile robots face significant challenges in tracking control in complex environments. Dynamic environments such as meadows and steep slopes can cause performance degradation. This paper proposes a learning predictive control method with Gaussian process modeling, which can effectively model and predict environmental and model uncertainty and then design optimal control strategies using the uncertainty model. The method uses Gaussian process regression to model the uncertainty and employs the resulting model to learn the optimal policy within a receding horizon reinforcement learning algorithm, iteratively optimizing the control strategy. For the lateral tracking control problem of wheeled robots on elliptical and figure-eight trajectories, simulation experiments were carried out and compared with nonlinear model predictive control. The results indicate that the proposed algorithm effectively enhances control performance in complex scenarios, showing a 20% improvement in the performance index compared to receding horizon reinforcement learning without Gaussian process modeling and a 36% improvement compared to nonlinear model predictive control, which verifies the effectiveness and superiority of the proposed method.
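The modeling step the abstracts describe can be illustrated with a minimal sketch (not the paper's implementation): Gaussian process regression is fitted to the residual between a nominal unicycle kinematic model and observed motion, so that the corrected predictor is the nominal model plus the GP mean. The terrain-slip disturbance, kernel, hyperparameters, and all names below are illustrative assumptions.

```python
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """Squared-exponential kernel k(a, b) = var * exp(-||a - b||^2 / (2*ls^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

class GP:
    """Minimal GP regression: posterior mean and std with fixed hyperparameters."""
    def __init__(self, ls=1.0, var=1.0, noise=1e-4):
        self.ls, self.var, self.noise = ls, var, noise

    def fit(self, X, Y):
        self.X = X
        K = rbf(X, X, self.ls, self.var) + self.noise * np.eye(len(X))
        self.L = np.linalg.cholesky(K)
        self.alpha = np.linalg.solve(self.L.T, np.linalg.solve(self.L, Y))
        return self

    def predict(self, Xs):
        Ks = rbf(Xs, self.X, self.ls, self.var)
        mean = Ks @ self.alpha                      # posterior mean of residual
        v = np.linalg.solve(self.L, Ks.T)
        var = np.maximum(self.var - (v ** 2).sum(0), 0.0)
        return mean, np.sqrt(var)                   # predictive std per query point

def nominal_step(s, u, dt=0.1):
    """One-step unicycle kinematics: s = (x, y, theta), u = (v, omega)."""
    x, y, th = s
    v, w = u
    return np.array([x + v * np.cos(th) * dt,
                     y + v * np.sin(th) * dt,
                     th + w * dt])

rng = np.random.default_rng(0)
X, Y = [], []
for _ in range(200):
    s = rng.uniform(-1.0, 1.0, size=3)
    u = rng.uniform([0.0, -1.0], [1.0, 1.0])
    slip = 0.05 * np.sin(3.0 * s[2]) * u[0]         # hypothetical terrain slip
    s_next = nominal_step(s, u) + np.array([slip, slip, 0.0])
    X.append(np.concatenate([s, u]))
    Y.append(s_next - nominal_step(s, u))           # residual = GP training target
gp = GP().fit(np.array(X), np.array(Y))

# Corrected one-step prediction = nominal kinematics + GP residual mean.
s0, u0 = np.array([0.1, 0.2, 0.3]), np.array([0.5, 0.1])
mean, std = gp.predict(np.concatenate([s0, u0])[None, :])
s_pred = nominal_step(s0, u0) + mean[0]
```

In a receding-horizon scheme of the kind the abstracts mention, such a corrected predictor would be rolled out over the horizon at each control step, with `std` indicating where the learned residual is trustworthy; the policy-learning loop itself is beyond this sketch.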
|
|
|
|
|