A Self-Supervised Model Advances OCTA Image Disease Diagnosis

2022 
Due to the scarcity of medical image datasets, disease detection is generally realized through transfer learning/fine-tuning, most commonly from models pretrained on ImageNet. However, significant domain differences between natural and medical images severely restrict model performance. In this paper, a contrastive learning method (BY-OCTA) that incorporates patient metadata is proposed to detect pathology in fundus OCTA images. The method uses patient metadata to construct positive sample pairs. By introducing hyperparameters into the loss function, the proportion of sample pairs sharing the same patient metadata can be reasonably adjusted, yielding a better representation and initialization model. Downstream-task performance is evaluated by fine-tuning a multi-layer perceptron on top of the pretrained model. Experiments show that a linear model pretrained with BY-OCTA outperforms models pretrained on ImageNet or with BYOL across multiple datasets. Furthermore, BY-OCTA provides the most significant benefit when labeled training data are limited, indicating that the BY-OCTA pretrained model has stronger representation-extraction ability and transferability. The method allows a flexible combination of medical opinions, using metadata to construct positive sample pairs, and can be widely applied to medical image interpretation.
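The abstract describes a BYOL-style contrastive loss in which a hyperparameter adjusts the influence of sample pairs sharing the same patient metadata. The paper does not give the exact formulation here, so the following is a minimal sketch under stated assumptions: a standard BYOL pairwise loss (2 minus twice the cosine similarity between online predictions and target projections), with pairs that share patient metadata up-weighted by a hypothetical hyperparameter `lam`. The function names, the weighting scheme, and `lam` itself are illustrative assumptions, not the authors' definitions.

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two batches of vectors."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return np.sum(a * b, axis=-1)

def weighted_byol_loss(online_pred, target_proj, same_metadata, lam=0.5):
    """BYOL-style pairwise loss (2 - 2*cos_sim), with pairs that share
    patient metadata up-weighted by 1 + lam.

    NOTE: the weighting scheme and `lam` are assumptions for illustration;
    the paper only states that hyperparameters in the loss adjust the
    proportion of same-metadata sample pairs.
    """
    per_pair = 2.0 - 2.0 * cosine_sim(online_pred, target_proj)
    weights = np.where(same_metadata, 1.0 + lam, 1.0)
    return float(np.mean(weights * per_pair))

# Toy usage: two embedding pairs, the second sharing patient metadata.
online = np.array([[1.0, 0.0], [0.0, 1.0]])
target = np.array([[1.0, 0.0], [0.0, 1.0]])
same = np.array([False, True])
print(weighted_byol_loss(online, target, same))  # aligned pairs -> 0.0
```

In this sketch, raising `lam` makes representations of same-metadata views agree more strongly, which is one plausible way to "adjust the proportion" of such pairs' contribution to the loss.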