Self-supervised multi-task representation learning for sequential medical images
2021
Self-supervised representation learning has achieved promising results on downstream visual tasks in natural images. However, its use in the medical domain, where images share an underlying human anatomical structure, remains underexplored. To address this gap, we propose a self-supervised multi-task representation learning framework for sequential 2D medical images that explicitly exploits these underlying structures through multiple pretext tasks. Unlike current state-of-the-art methods, which pre-train only the encoder for instance discrimination tasks, the proposed framework pre-trains the encoder and the decoder simultaneously for dense prediction tasks. We evaluate the representations extracted by the proposed framework on two public whole heart segmentation datasets from different domains. The experimental results show that our framework outperforms MoCo V2, a strong representation learning baseline. Given only a small amount of labeled data, segmentation networks pre-trained by the proposed framework on unlabeled data achieve better results than their counterparts trained with standard supervised approaches.
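To make the encoder-plus-decoder pre-training idea concrete, below is a minimal sketch of multi-task self-supervised pre-training for an encoder-decoder network. The abstract does not specify the paper's actual pretext tasks or architecture, so the two tasks used here (image reconstruction, which trains both encoder and decoder densely, and rotation prediction, which trains the encoder) are illustrative assumptions, and all names (`Encoder`, `Decoder`, `rot_head`, `pretrain_step`) are hypothetical.

```python
# Hedged sketch: multi-task self-supervised pre-training of an
# encoder-decoder. The pretext tasks chosen here are assumptions for
# illustration, not the paper's actual objectives.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, out_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

encoder, decoder = Encoder(), Decoder()
rot_head = nn.Linear(64, 4)  # classifies one of 4 rotations from pooled features
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()) + list(rot_head.parameters()),
    lr=1e-3,
)

def pretrain_step(images):
    """One multi-task step: both losses back-propagate through the encoder,
    and the reconstruction loss also trains the decoder."""
    # Pretext task 1 (assumed): reconstruct the input slice (dense prediction,
    # so gradients reach both encoder and decoder).
    z = encoder(images)
    recon_loss = F.mse_loss(decoder(z), images)

    # Pretext task 2 (assumed): predict a random 90-degree rotation
    # (instance-level, encoder only).
    k = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])
    logits = rot_head(encoder(rotated).mean(dim=(2, 3)))
    rot_loss = F.cross_entropy(logits, k)

    loss = recon_loss + rot_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy batch of sequential 2D slices (e.g. 1x128x128 cardiac images).
batch = torch.randn(8, 1, 128, 128)
print(pretrain_step(batch))
```

After pre-training on unlabeled slices, the encoder and decoder weights would be kept to initialize a segmentation network and fine-tuned on the small labeled set, which is the usage pattern the abstract's evaluation describes.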