Adversarial Network with Dual U-net Model and Multiresolution Loss Computation for Medical Images Registration

2019 
This paper presents a generative adversarial network (DUGan) for image registration that is unsupervised, requiring no ground-truth deformations. In this registration framework, the generative network produces the deformation field, which is fed to the transformation module to obtain the warped image. The discriminator network then judges the similarity of the warped and fixed image pair. The first contribution is that the U-Net model is introduced into both networks to exploit its hierarchical feature representation. The second is that multiresolution loss computation is adopted to keep the deformation smooth in the generative network and to provide an effective similarity measure in the discriminator network. Experiments on brain datasets indicate that our method yields accuracy comparable to state-of-the-art registration methods.
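The pipeline described in the abstract (generator predicts a deformation field, a spatial transformation module warps the moving image, a discriminator judges warped/fixed pairs, and a multiresolution term keeps the deformation smooth) can be sketched as follows. This is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the SimpleUNet class, the warp helper, the multiresolution smoothness term, and all hyperparameters are assumptions for illustration only.

```python
# Hypothetical sketch of an adversarial registration loop; names and settings are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleUNet(nn.Module):
    """Toy two-level U-Net: encoder, downsampling, upsampling, one skip connection."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = nn.Conv2d(32, out_ch, 3, padding=1)  # 32 = 16 (skip) + 16 (upsampled)

    def forward(self, x):
        e = self.enc(x)
        d = self.up(self.down(e))
        return self.dec(torch.cat([e, d], dim=1))

def warp(moving, flow):
    """Spatial transformation module: resample `moving` with displacement `flow` (B, 2, H, W)."""
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1).to(moving)
    # Convert pixel displacements to normalized grid offsets for grid_sample.
    offset = torch.stack([flow[:, 0] * 2 / (w - 1), flow[:, 1] * 2 / (h - 1)], dim=-1)
    return F.grid_sample(moving, base + offset, align_corners=True)

def multires_smoothness(flow, levels=3):
    """Penalize flow gradients at several resolutions to encourage a smooth deformation."""
    loss = 0.0
    for _ in range(levels):
        dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
        dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
        loss = loss + dx + dy
        flow = F.avg_pool2d(flow, 2)
    return loss

# Generator sees the (moving, fixed) pair; discriminator judges (image, fixed) pairs.
G = SimpleUNet(in_ch=2, out_ch=2)
D = nn.Sequential(SimpleUNet(in_ch=2, out_ch=1), nn.AdaptiveAvgPool2d(1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

moving = torch.rand(1, 1, 64, 64)  # stand-in for a moving brain slice
fixed = torch.rand(1, 1, 64, 64)   # stand-in for the fixed image

for step in range(2):              # shortened training loop for illustration
    flow = G(torch.cat([moving, fixed], dim=1))
    warped = warp(moving, flow)

    # Discriminator update: (fixed, fixed) treated as aligned, (warped, fixed) as not yet aligned.
    d_real = D(torch.cat([fixed, fixed], dim=1)).flatten(1)
    d_fake = D(torch.cat([warped.detach(), fixed], dim=1)).flatten(1)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool the discriminator while keeping the deformation smooth.
    d_gen = D(torch.cat([warped, fixed], dim=1)).flatten(1)
    loss_g = bce(d_gen, torch.ones_like(d_gen)) + 0.1 * multires_smoothness(flow)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Note that the warped image is detached for the discriminator update so that only the generator is trained through the adversarial signal, and the pooled smoothness term stands in for the paper's multiresolution loss computation; the exact losses, network depths, and weighting are not specified by the abstract.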