Keratoconus detection using deep learning of colour-coded maps with anterior segment optical coherence tomography: a diagnostic accuracy study

2019 
Objective: To evaluate the accuracy of keratoconus diagnosis by deep learning of colour-coded maps measured with swept-source anterior segment optical coherence tomography (AS-OCT).

Design: A diagnostic accuracy study.

Setting: A single-centre study.

Participants: A total of 304 keratoconic eyes graded by the Amsler-Krumeich classification (grade 1, 108 eyes; grade 2, 75 eyes; grade 3, 42 eyes; grade 4, 79 eyes) and 239 age-matched healthy eyes.

Main outcome measures: The diagnostic accuracy of keratoconus using deep learning of six colour-coded maps (anterior elevation, anterior curvature, posterior elevation, posterior curvature, total refractive power and pachymetry).

Results: Deep learning of the arithmetic mean of the output data of the six maps achieved an accuracy of 0.991 in discriminating between normal and keratoconic eyes. For single-map analysis, the posterior elevation map showed the highest accuracy (0.993), followed by the posterior curvature map (0.991), anterior elevation map (0.983), corneal pachymetry map (0.982), total refractive power map (0.978) and anterior curvature map (0.976). In classifying the stage of the disease, the same deep-learning approach achieved an accuracy of 0.874; the posterior curvature map showed the highest single-map accuracy (0.869), followed by the corneal pachymetry map (0.845), anterior curvature map (0.836), total refractive power map (0.836), posterior elevation map (0.829) and anterior elevation map (0.820).

Conclusions: Deep learning using the colour-coded maps obtained with AS-OCT effectively discriminates keratoconus from normal corneas and, furthermore, classifies the grade of the disease. This approach may become an aid for improving the diagnostic accuracy of keratoconus in daily practice.

Clinical trial registration number: 000034587.
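To illustrate the combination rule described in the Results (taking the arithmetic mean of the output of one classifier per colour-coded map), the sketch below shows a minimal, hypothetical implementation. It is not the authors' code: the network architecture, image size, class labelling and all identifiers are illustrative assumptions made only to demonstrate the averaging step.

```python
# Minimal sketch (assumed, not the authors' implementation) of averaging the
# per-class outputs of six per-map classifiers, one for each colour-coded map.
import torch
import torch.nn as nn

MAP_NAMES = [
    "anterior_elevation", "anterior_curvature",
    "posterior_elevation", "posterior_curvature",
    "total_refractive_power", "pachymetry",
]
NUM_CLASSES = 5  # assumed labelling: normal + Amsler-Krumeich grades 1-4


class MapCNN(nn.Module):
    """Small stand-in classifier for a single colour-coded map."""

    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


def ensemble_predict(models: dict, maps: dict) -> torch.Tensor:
    """Arithmetic mean of the softmax outputs of the six per-map models."""
    probs = [torch.softmax(models[name](maps[name]), dim=1) for name in MAP_NAMES]
    return torch.stack(probs).mean(dim=0)  # shape: (batch, NUM_CLASSES)


if __name__ == "__main__":
    models = {name: MapCNN().eval() for name in MAP_NAMES}
    # One dummy 224x224 RGB colour-coded map per modality for a single eye.
    maps = {name: torch.randn(1, 3, 224, 224) for name in MAP_NAMES}
    with torch.no_grad():
        mean_probs = ensemble_predict(models, maps)
    print("predicted class:", mean_probs.argmax(dim=1).item())
```

Averaging the per-map probabilities lets each modality contribute equally to the final decision, which matches the abstract's description of using the arithmetic mean of the six maps' output data rather than a single map alone.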