Cite this article: CAO Jie, GU Bin-jie, PAN Feng, XIONG Wei-li. Accurate incremental ε-twin support vector regression [J]. Control Theory & Applications, 2022, 39(6): 1020-1032.
Accurate incremental ε-twin support vector regression
Received: 2020-08-08    Revised: 2022-03-16
DOI: 10.7641/CTA.2021.00517
2022, 39(6): 1020-1032
Keywords: machine learning; incremental learning; online learning; twin support vector regression; learning algorithms; feasibility analysis; finite convergence analysis
Funding: Supported by the National Natural Science Foundation of China (61773182).
|
Abstract
To address the problem that existing training algorithms for ε-twin support vector regression cannot efficiently handle incremental learning for linear regression, an accurate incremental ε-twin support vector regression (AIETSVR) is proposed. First, by calculating the Lagrangian multiplier of the new sample and adjusting the Lagrangian multipliers of the boundary samples, the influence of the quadratic loss of the new sample on the existing samples is reduced as much as possible, so that most of the existing samples still satisfy the Karush-Kuhn-Tucker (KKT) conditions and a valid initial state is obtained. Then, the exceptional Lagrangian multipliers are adjusted step by step until they satisfy the KKT conditions. Next, the feasibility and finite convergence of AIETSVR are analyzed theoretically. Finally, simulations are conducted on benchmark datasets. The results show that, compared with existing representative algorithms, AIETSVR obtains accurate solutions and offers a significant advantage in reducing training time on large-scale datasets.
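
For readers unfamiliar with the KKT bookkeeping the abstract refers to (classifying samples by their multipliers and flagging only the "exceptional" ones after a new sample arrives), the following is a minimal Python sketch for a generic box-constrained ε-insensitive dual. It is not the specific pair of quadratic programs solved by ε-TSVR, nor the paper's AIETSVR update rules; the function kkt_violations and all numeric values are hypothetical illustrations.

```python
# Minimal conceptual sketch of KKT-based sample classification for a generic
# box-constrained, epsilon-insensitive dual (NOT the exact epsilon-TSVR dual
# and NOT the AIETSVR update rules from the paper).
import numpy as np

def kkt_violations(alpha, residual, C, eps, tol=1e-6):
    """Indices of multipliers that violate the KKT conditions.

    For a dual variable 0 <= alpha_i <= C attached to an epsilon-insensitive
    constraint with residual `residual[i]`, the KKT conditions require:
      alpha_i = 0      -> residual <= eps   (sample inside the tube)
      0 < alpha_i < C  -> residual == eps   (boundary sample)
      alpha_i = C      -> residual >= eps   (sample outside the tube)
    """
    inside   = (alpha <= tol) & (residual <= eps + tol)
    boundary = (alpha > tol) & (alpha < C - tol) & (np.abs(residual - eps) <= tol)
    at_bound = (alpha >= C - tol) & (residual >= eps - tol)
    return np.where(~(inside | boundary | at_bound))[0]

# Toy usage: three existing samples satisfy KKT; a new sample is appended with
# its multiplier initialised to 0 but a residual outside the tube, so it is the
# only "exceptional" multiplier that needs adjusting.
C, eps   = 1.0, 0.10
alpha    = np.array([0.0, 0.3, 1.0])     # existing multipliers
residual = np.array([0.02, 0.10, 0.25])  # residuals of existing samples

alpha    = np.append(alpha, 0.0)         # new sample starts at alpha = 0
residual = np.append(residual, 0.40)     # its residual lies outside the tube

print(kkt_violations(alpha, residual, C, eps))  # -> [3]
```

In an incremental scheme of the kind the abstract describes, only the samples returned by such a check would need their multipliers adjusted, which is what keeps the per-sample update cheap on large datasets.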
|
|
|
|
|