Contextual Information Driven Multi-modal Medical Image Fusion

2017 
Abstract: To exploit the contextual correlation between coefficients in the contourlet domain, a novel multi-modal medical image fusion method based on contextual information is proposed. First, the contextual information of the contourlet coefficients is calculated to capture the strong dependencies among coefficients. Second, a hidden Markov model based on this contextual information for the contourlet coefficients (C-CHMM) is constructed to describe the characteristics of medical images with a small number of parameters. Then, low-pass subband coefficients are combined by the magnitude-maximum rule, and high-pass subband coefficients are fused by a new C-CHMM-driven multi-strategy fusion rule. Finally, the fused image is obtained by the inverse contourlet transform. Experimental results demonstrate that the proposed fusion method effectively suppresses color distortion and provides better fusion quality than several typical fusion methods.
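As a minimal illustration of one step in the pipeline, the magnitude-maximum rule used for the low-pass subband can be sketched as an element-wise selection of the coefficient with the larger absolute value. The function name, the toy subband values, and the use of NumPy arrays are assumptions for this sketch, not the paper's implementation:

```python
import numpy as np

def magnitude_max_fuse(coeffs_a, coeffs_b):
    """Magnitude-maximum rule (sketch): for each position, keep the
    subband coefficient with the larger absolute value from either
    source image's contourlet decomposition."""
    a = np.asarray(coeffs_a, dtype=float)
    b = np.asarray(coeffs_b, dtype=float)
    return np.where(np.abs(a) >= np.abs(b), a, b)

# Hypothetical low-pass subbands from two modalities (e.g. CT and MRI)
ct_lowpass = np.array([[0.9, -0.1], [0.3, 2.0]])
mri_lowpass = np.array([[-1.2, 0.05], [0.4, -0.5]])
fused_lowpass = magnitude_max_fuse(ct_lowpass, mri_lowpass)
# Each fused coefficient is the larger-magnitude one of the pair,
# e.g. -1.2 is kept over 0.9 at position (0, 0).
```

The high-pass fusion in the paper is more involved, since the C-CHMM supplies context-dependent statistics that drive a multi-strategy rule rather than a single element-wise comparison.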