Unpaired MR Image Homogenisation by Disentangled Representations and Its Uncertainty

2021 
Inter-scanner and inter-protocol differences in MRI datasets are known to induce significant quantification variability. Hence data homogenisation is crucial for a reliable combination of data or observations from different sources. Existing homogenisation methods rely on pairs of images to learn a mapping from a source domain to a reference domain. In the real world, however, only unpaired data from the source and reference domains are available. In this paper, we address this scenario by proposing an unsupervised image-to-image translation framework which models the complex mapping by disentangling the image space into a common content space and a scanner-specific one. We perform image quality enhancement between two MR scanners, enriching the structural information and reducing the noise level. We evaluate our method on both healthy-control and multiple sclerosis (MS) cohorts and observe both visual and quantitative improvement over state-of-the-art GAN-based methods, while retaining regions of diagnostic importance such as lesions. In addition, for the first time, we quantify the uncertainty in the unsupervised homogenisation pipeline to enhance interpretability. Code is available: https://github.com/hongweilibran/Multi-modal-medical-image-synthesis.
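The core idea of the abstract, disentangling each image into a shared content code and a scanner-specific code and then recombining codes across domains, can be sketched as follows. This is a minimal illustrative sketch only: the linear "encoders"/"decoder", all dimensions, and the variable names are hypothetical stand-ins for the paper's trained networks.

```python
import numpy as np

# Minimal sketch of the disentangling idea (illustrative only; the linear
# maps below are hypothetical stand-ins for the paper's trained networks).
rng = np.random.default_rng(0)
D_IMG, D_CONTENT, D_STYLE = 64, 16, 4  # hypothetical dimensions

# Linear stand-ins for the shared content encoder, the scanner-specific
# style encoder, and the decoder.
W_content = rng.standard_normal((D_CONTENT, D_IMG)) * 0.1
W_style = rng.standard_normal((D_STYLE, D_IMG)) * 0.1
W_decode = rng.standard_normal((D_IMG, D_CONTENT + D_STYLE)) * 0.1

def encode(x):
    """Split an image vector into a content code and a style code."""
    return W_content @ x, W_style @ x

def decode(content, style):
    """Reassemble an image vector from content + style codes."""
    return W_decode @ np.concatenate([content, style])

# Two unpaired images from different scanners.
x_source = rng.standard_normal(D_IMG)  # source-scanner image
x_ref = rng.standard_normal(D_IMG)     # reference-scanner image

c_src, _ = encode(x_source)   # keep the source image's content (anatomy)
_, s_ref = encode(x_ref)      # borrow the reference scanner's appearance

# "Homogenise": render the source anatomy with the reference style.
x_translated = decode(c_src, s_ref)
print(x_translated.shape)  # (64,)
```

In the actual framework the encoders and decoder are deep networks trained adversarially on unpaired data; the sketch only shows the code-swapping mechanism that makes cross-scanner translation possible without paired images.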