Generation of Multimodal Ground Truth Datasets for Abdominal Medical Image Registration Using CycleGAN.

2020 
Sparsity of annotated data is a major limitation in medical image processing tasks such as registration. Registered multimodal image data are essential for the success of various medical procedures. To overcome the shortage of data, we present a method that allows the generation of annotated, multimodal 4D datasets. We use a CycleGAN network architecture to generate multimodal synthetic data from a digital body phantom and real patient data. The generated T1-weighted MRI, CT, and CBCT images are inherently co-registered. Because organ masks are also provided by the digital body phantom, the generated dataset serves as a ground truth for image segmentation and registration. Realistic simulation of respiration and heartbeat is possible within the framework. Compared to real patient data, the synthetic data showed good agreement in voxel intensity distribution and noise characteristics. To underline the usability as a registration ground truth, a proof-of-principle registration was performed. We were able to optimize the parameters of the multimodal non-rigid registration in the process, utilizing the liver organ masks for evaluation purposes. The best-performing registration setting reduced the average symmetric surface distance (ASSD) of the liver masks from 8.7 mm to 0.8 mm. Thus, we could demonstrate the applicability of synthetic data for the development of medical image registration algorithms. This approach can be readily adapted for multimodal image segmentation.
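The abstract evaluates registration quality via the average symmetric surface distance (ASSD) between liver masks. The paper does not include code; the following is a minimal sketch of how ASSD is commonly computed for binary 3D masks, using SciPy distance transforms. The function names and the use of `scipy.ndimage` are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    """Surface of a binary mask: voxels removed by one erosion step."""
    return mask & ~ndimage.binary_erosion(mask)

def assd(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two binary masks.

    Returns the mean, over the surface voxels of both masks, of the
    Euclidean distance to the other mask's surface (in the units of
    `spacing`, e.g. mm). Illustrative sketch, not the paper's code.
    """
    sa = surface_voxels(mask_a)
    sb = surface_voxels(mask_b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_b = ndimage.distance_transform_edt(~sb, sampling=spacing)
    dt_a = ndimage.distance_transform_edt(~sa, sampling=spacing)
    d_ab = dt_b[sa]  # distances: surface of A -> surface of B
    d_ba = dt_a[sb]  # distances: surface of B -> surface of A
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)
```

With the organ masks supplied by the digital phantom, a metric like this can score each candidate registration setting, which is how the reported reduction from 8.7 mm to 0.8 mm would be measured.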