Han Yan, Guangtao Zhang, et al. Multi-modality hierarchical fusion network for lumbar spine segmentation with magnetic resonance images [J]. Control Theory and Technology, 2024, 22(4): 612-622.



DOI:https://doi.org/10.1007/s11768-024-00231-9
基金项目:This work was supported in part by the Technology Innovation 2030 under Grant 2022ZD0211700.
Multi-modality hierarchical fusion network for lumbar spine segmentation with magnetic resonance images
Han Yan1,2,Guangtao Zhang1,Wei Cui1,Zhuliang Yu1,3,4
(1 School of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, Guangdong, China; 2 Guangzhou First People's Hospital, Guangzhou 510180, Guangdong, China; 3 Shien-Ming Wu School of Intelligent Engineering, South China University of Technology, Guangzhou 511442, Guangdong, China; 4 Institute for Super Robotics (Huangpu), Guangzhou 510000, Guangdong, China)
Abstract:
Automated tissue segmentation of the lumbar spine is vital for the analysis of spinal and disc diseases. Conventional automatic segmentation methods perform poorly on this task because the target tissues are continuous and closely located, edge features are abundant, and individual anatomy varies. Following the success of deep learning in medical image segmentation over the past few years, it has been applied to this task in a number of ways. However, deep learning methods rarely exploit the multi-scale and multi-modal features of lumbar tissues. Because medical images are often in short supply, effectively fusing data acquired in multiple modalities for model training is crucial to alleviate the problem of insufficient samples. In this paper, we propose a novel multi-modality hierarchical fusion network (MHFN) that improves lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images. An adaptive group fusion module (AGFM) is introduced to fuse features from the different modalities and extract valuable cross-modality features. Furthermore, to combine cross-modality features from low to high levels, we design a hierarchical fusion structure based on AGFM. Experimental results on multi-modality MR images of the lumbar spine show that AGFM is more effective than the other feature fusion methods compared. To further assess segmentation accuracy, we compare our network with baseline fusion structures (input-level: 76.27%, layer-level: 78.10%, decision-level: 79.14%); our network segments fractured vertebrae more accurately (85.05%).
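The abstract does not specify the internals of the adaptive fusion; as a purely illustrative sketch (not the authors' AGFM), one common way to fuse two modality feature maps adaptively is to pool each map globally, predict per-channel gates from the pooled statistics, normalise the gates across modalities, and take a gated sum. All names, shapes, and weights below are hypothetical, shown with plain numpy rather than a deep learning framework:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_modality_fusion(feat_a, feat_b, w, b):
    """Illustrative adaptive fusion of two modality feature maps.

    feat_a, feat_b: (C, H, W) feature maps from two MR modalities
                    (e.g. T1- and T2-weighted; hypothetical here).
    w: (2C, 2C) gate-prediction weights, b: (2C,) bias (hypothetical,
       would be learned in a real network).

    Each channel's output is a convex combination of the two
    modalities, with weights predicted from global context.
    """
    # Global average pooling of both modalities -> (2C,) descriptor.
    pooled = np.concatenate([feat_a.mean(axis=(1, 2)),
                             feat_b.mean(axis=(1, 2))])
    # Predict one gate per channel per modality, normalise across
    # the two modalities so gates sum to 1 channel-wise.
    gates = softmax((w @ pooled + b).reshape(2, -1), axis=0)  # (2, C)
    ga = gates[0][:, None, None]  # broadcast over H, W
    gb = gates[1][:, None, None]
    return ga * feat_a + gb * feat_b
```

Because the gates are softmax-normalised, the fused map stays elementwise between the two inputs per channel; a hierarchical variant would apply such a module at several encoder depths, as the paper's fusion structure suggests.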
Key words:  Lumbar spine segmentation · Deep learning · Multi-modality fusion · Feature fusion