Cite this article: WANG Jin, LIU Jie, GAO Chang-xin, SANG Nong. Ensemble of pose aligned features for person re-identification [J]. Control Theory & Applications, 2017, 34(6): 837-842.
|
Ensemble of pose aligned features for person re-identification
Received: 2016-08-29    Revised: 2016-12-21
DOI: 10.7641/CTA.2017.60647
2017, 34(6): 837-842
Keywords: person re-identification; pose alignment; structural SVM
Abstract (translated from the Chinese):
Person re-identification is the task of retrieving a target pedestrian across a video surveillance network, given a query image of that pedestrian. Pedestrian pose variations and illumination changes across surveillance scenes are the two main challenges of this task. To handle pose variations, this paper first densely samples image patches from the pedestrian images in the training set, then extracts a local appearance-spatial feature from each patch, and finally clusters this feature set to obtain a generic pedestrian part dictionary. Because the part dictionary encodes pedestrian part information, each codeword in the dictionary establishes a correspondence between specific image patches of two pedestrian images. Projecting the patch sets of two pedestrian images onto the part dictionary yields pose-aligned patch sequences for the two images. To handle illumination changes, this paper extracts four color descriptors from the pose-aligned patches and combines the patch similarities under the different color descriptors at the score level for better illumination invariance; the combination coefficients across the color descriptors are learned with a structural output support vector machine. Experimental results on the widely used viewpoint invariant pedestrian recognition (VIPeR) dataset show that the method performs well in the presence of pedestrian pose variations and scene illumination changes.
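To make the dictionary-building and alignment steps concrete, the following is a minimal Python sketch of the idea under loudly stated assumptions: the appearance-spatial descriptor is simplified to mean patch color plus normalized patch position, scikit-learn's KMeans stands in for the clustering step, and all function names (extract_patch_features, train_part_dictionary, align_pair) are hypothetical. This is an illustration of the technique described in the abstract, not the authors' implementation.

import numpy as np
from sklearn.cluster import KMeans

def extract_patch_features(image, patch_size=16, stride=8):
    """Densely sample patches from an H x W x 3 image and describe each by a
    simplified appearance-spatial feature: mean patch color concatenated with
    the patch's normalized (x, y) position."""
    h, w, _ = image.shape
    feats = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patch = image[y:y + patch_size, x:x + patch_size]
            appearance = patch.reshape(-1, 3).mean(axis=0)   # mean color (3-dim)
            spatial = np.array([x / w, y / h])               # normalized position (2-dim)
            feats.append(np.concatenate([appearance, spatial]))
    return np.asarray(feats)

def train_part_dictionary(train_images, num_parts=32):
    """Cluster the pooled appearance-spatial features of all training images;
    each cluster center acts as a codeword encoding a pedestrian part."""
    all_feats = np.vstack([extract_patch_features(img) for img in train_images])
    return KMeans(n_clusters=num_parts, n_init=10).fit(all_feats)

def align_pair(img_a, img_b, dictionary):
    """Project both images' patches onto the part dictionary: patches assigned
    to the same codeword form a corresponding, pose-aligned patch group."""
    labels_a = dictionary.predict(extract_patch_features(img_a))
    labels_b = dictionary.predict(extract_patch_features(img_b))
    pairs = {}
    for part in range(dictionary.n_clusters):
        idx_a = np.flatnonzero(labels_a == part)
        idx_b = np.flatnonzero(labels_b == part)
        if idx_a.size and idx_b.size:                        # part visible in both images
            pairs[part] = (idx_a, idx_b)
    return pairs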
Abstract (English):
Person re-identification is the task of finding a person of interest across a network of cameras. Pose variations and illumination changes are two major challenges for person re-identification. In this paper, we propose a method to tackle both problems. For pose variations, we train a part dictionary using local appearance-spatial features extracted from densely sampled image patches. Each codeword establishes the correspondence between a pair of patches to be matched across two images. Images are then aligned by projecting onto the common part dictionary; in this way, two images are normalized into a list of corresponding patch pairs. For illumination changes, different color descriptors are extracted from each corresponding patch pair. To obtain better invariance against illumination changes, these descriptors are finally combined at the score level by structural output learning. Experiments on the highly challenging viewpoint invariant pedestrian recognition (VIPeR) dataset demonstrate the effectiveness of our approach.
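The score-level combination step can likewise be sketched in a few lines of Python, under stated assumptions: patches_a and patches_b are the lists of densely sampled patches (indexed consistently with the alignment above), descriptors is a hypothetical list of functions mapping a patch to a color feature vector, and weights stands in for the coefficients the paper learns with a structural output SVM; the learning itself is not shown.

import numpy as np

def fused_similarity(patches_a, patches_b, pairs, descriptors, weights):
    """Score an image pair: for each color descriptor, accumulate the
    similarity of all pose-aligned patch groups, then combine the
    per-descriptor scores with the learned weights."""
    scores = np.zeros(len(descriptors))
    for d, describe in enumerate(descriptors):
        for idx_a, idx_b in pairs.values():
            # Average the descriptor over the patches assigned to this part.
            fa = np.mean([describe(patches_a[i]) for i in idx_a], axis=0)
            fb = np.mean([describe(patches_b[j]) for j in idx_b], axis=0)
            scores[d] += -np.linalg.norm(fa - fb)  # negative distance as similarity
    return float(np.dot(weights, scores))

In the paper, the weights are learned with a structural output SVM so that, for each query, the true match is ranked above the non-matches; in this sketch they are simply supplied by the caller.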