Cite this article: LIU Han, WANG Yu, MA Yan. Deep neural networks compression based on improved clustering [J]. Control Theory & Applications, 2019, 36(7): 1130-1136.
Deep neural networks compression based on improved clustering
Received: 2017-12-22    Revised: 2018-11-12
DOI: 10.7641/CTA.2018.70592
2019, 36(7): 1130-1136
Keywords: deep neural networks; pruning; K-Means++ clustering; deep network compression
Funding: supported by the Key Program of the National Natural Science Foundation of China (61833013), the Key Project of the Shaanxi Province Key Research and Development Program (2018ZDXM-GY-089), the Research Program of the Shaanxi Collaborative Innovation Center of Modern Equipment Green Manufacturing (304-210891704), the Scientific Research Program of the Shaanxi Provincial Department of Education (2017JS088), and the Special Research Program of Xi'an University of Technology (2016TS023)
Author    Affiliation    E-mail
LIU Han*    Xi'an University of Technology    liuhan@xaut.edu.cn
WANG Yu    Xi'an University of Technology
MA Yan    Xi'an University of Technology
Abstract
      Deep neural networks are typically over-parameterized, and deep learning models contain significant redundancy, which results in a great waste of both computation and memory. To address this problem, this paper proposes a compression method for deep neural networks based on improved clustering. First, the normally trained network is pruned with a pruning strategy; then K-Means++ clustering is applied to the weights of each layer to obtain cluster centers, thereby realizing weight sharing; finally, the weights of each layer are quantized. Experiments are carried out on LeNet, AlexNet and VGG-16, in which the proposed method compresses the deep neural networks by a factor of 30 to 40 overall without any loss of accuracy. The results show that, with the compression method based on improved clustering, deep neural networks can be compressed effectively without accuracy loss, which makes the deployment of deep networks on mobile devices feasible.
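The three-step pipeline described in the abstract (pruning, K-Means++ weight sharing, quantization) can be illustrated with a minimal sketch. The snippet below is not the authors' implementation: the pruning threshold, the number of clusters, and the compress_layer helper are illustrative assumptions, and scikit-learn's KMeans with k-means++ initialization stands in for the paper's clustering step.

import numpy as np
from sklearn.cluster import KMeans

def compress_layer(weights, prune_threshold=0.02, n_clusters=16):
    """Prune small weights, then share the survivors among K-Means++ centroids."""
    w = weights.astype(np.float64)
    # Step 1: pruning -- zero out weights whose magnitude is below the threshold.
    mask = np.abs(w) >= prune_threshold
    w[~mask] = 0.0
    # Step 2: K-Means++ clustering of the surviving (non-zero) weights of this layer.
    kmeans = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10)
    kmeans.fit(w[mask].reshape(-1, 1))
    # Step 3: quantization / weight sharing -- every surviving weight is replaced by
    # its cluster centroid, so the layer stores only n_clusters distinct values plus
    # a log2(n_clusters)-bit index per weight.
    shared = w.copy()
    shared[mask] = kmeans.cluster_centers_[kmeans.labels_].ravel()
    return shared, mask, kmeans.cluster_centers_.ravel()

# Example: compress one random "layer" and count the distinct values that remain.
layer_weights = np.random.randn(256 * 128)
shared, mask, centroids = compress_layer(layer_weights)
print(len(np.unique(shared)))  # at most n_clusters + 1 (including the pruned zeros)

For example, with 16 clusters each surviving weight needs only a 4-bit index instead of a 32-bit float; combined with the sparsity obtained from pruning, this illustrates how overall compression ratios of the magnitude reported in the abstract can arise.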