2.75D: Boosting Learning Efficiency and Capability by Representing 3D Features in 2D.

2020 
In medical imaging, 3D convolutional neural networks (CNNs) have shown superior performance to 2D CNNs in numerous deep learning tasks with high-dimensional input, proving the added value of 3D spatial information in feature representation. However, 3D CNNs require more training samples to converge, and more computational resources and execution time for both training and inference. Meanwhile, applying transfer learning to 3D CNNs is challenging due to the lack of publicly available pre-trained 3D networks. To tackle these issues, we propose a novel strategic 2D representation of volumetric data, namely the 2.75D approach. In our method, the spatial information of a 3D image is captured in a single 2D view by a spiral-spinning technique. Our CNN is therefore intrinsically a 2D network, which can fully leverage pre-trained 2D CNNs for downstream vision problems. We evaluated the proposed method on the LUNA16 nodule detection challenge, comparing the 2.75D method with its 2D, 2.5D, and 3D counterparts on the nodule false-positive reduction task. Results show that the proposed method outperforms the counterparts when all methods are trained from scratch. This performance gain is more pronounced when transfer learning is introduced or when training data is limited. In addition, our method achieves a substantial reduction in training and inference time compared with the 3D method. Our code will be publicly available.
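The abstract does not spell out how the spiral-spinning view is constructed, so the sketch below is only an illustrative guess at the general idea: sample a 3D volume along a spherical spiral around its center and unroll the samples into a single 2D image that a standard 2D CNN can consume. The function name `spiral_2d_view`, the spiral parameterization, and all parameter defaults are assumptions, not the paper's actual scheme.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def spiral_2d_view(volume, n_turns=8, n_points_per_turn=64, n_radii=32):
    """Sample a 3D volume along a spherical spiral and unroll the samples
    into a 2D image (rows: radii, columns: positions along the spiral).

    NOTE: this parameterization is an illustrative assumption; the paper's
    exact spiral-spinning technique may differ.
    """
    center = (np.array(volume.shape) - 1) / 2.0
    max_radius = min(volume.shape) / 2.0 - 1

    n_points = n_turns * n_points_per_turn
    t = np.linspace(0.0, 1.0, n_points)
    theta = np.arccos(1 - 2 * t)           # polar angle sweeps 0..pi
    phi = 2 * np.pi * n_turns * t          # azimuth spins n_turns times

    # Unit direction vectors along the spiral on the unit sphere.
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=0)            # (3, n_points)

    # Sample every radius along every spiral direction.
    radii = np.linspace(0.0, max_radius, n_radii)       # (n_radii,)
    coords = center[:, None, None] + dirs[:, None, :] * radii[None, :, None]

    # Trilinear interpolation of the volume at the spiral sample points.
    image = map_coordinates(volume, coords.reshape(3, -1), order=1)
    return image.reshape(n_radii, n_points)

# Usage: turn a 64^3 candidate patch into one 2D view for a 2D CNN.
patch = np.random.rand(64, 64, 64).astype(np.float32)
view = spiral_2d_view(patch)   # shape (32, 512)
```

Because the output is an ordinary 2D array, it can be fed to any pre-trained 2D CNN, which is the transfer-learning advantage the abstract claims over native 3D networks.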