Glaucoma is an optic neuropathy accompanied by vision loss, which can be mapped by visual field (VF) testing to reveal characteristic patterns related to retinal nerve fibre layer anatomy. Although detailed knowledge of these patterns is important for understanding the anatomic and genetic aspects of glaucoma, current classification schemes are predominantly derived qualitatively. Here, we classify glaucomatous vision loss quantitatively by statistically learning prototypical patterns on the convex hull of the data space. In contrast to component-based approaches, this method emphasizes distinct aspects of the data and yields patterns that are easier for clinicians to interpret. Based on 13 231 reliable Humphrey VFs from a large clinical glaucoma practice, we identify an optimal solution with 17 glaucomatous vision loss prototypes, which agree well with qualitative patterns previously described in a large clinical study. We relate our patterns to retinal structure using a previously developed mathematical model. In contrast to qualitative clinical approaches, our results can serve as a framework for quantifying the various subtypes of glaucomatous visual field loss.
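The "prototypical patterns on the convex hull" idea corresponds to archetypal analysis: prototypes are constrained to be convex combinations of data points, and each VF is reconstructed as a convex mixture of prototypes. The sketch below is a minimal illustration of that idea, not the paper's actual implementation; all names (`X`, `n_archetypes`, the learning rate, the projected-gradient solver) are illustrative assumptions.

```python
# Minimal archetypal-analysis sketch: fit X ~ A @ Z with Z = B @ X,
# where rows of A and B lie on the probability simplex. Archetypes are
# thus convex combinations of samples and sit near the convex hull.
import numpy as np

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability simplex."""
    u = np.sort(v, axis=1)[:, ::-1]          # sort each row descending
    css = np.cumsum(u, axis=1)
    idx = np.arange(1, v.shape[1] + 1)
    rho = (u + (1.0 - css) / idx > 0).sum(axis=1)
    theta = (css[np.arange(len(v)), rho - 1] - 1.0) / rho
    return np.maximum(v - theta[:, None], 0.0)

def archetypal_analysis(X, n_archetypes, n_iter=500, lr=1e-3, seed=0):
    """Alternating projected gradient descent on ||A @ B @ X - X||_F^2;
    the learning rate is an ad-hoc choice and would need tuning."""
    rng = np.random.default_rng(seed)
    n, _ = X.shape
    A = project_simplex(rng.random((n, n_archetypes)))   # mixture weights
    B = project_simplex(rng.random((n_archetypes, n)))   # archetype weights
    for _ in range(n_iter):
        Z = B @ X                # current archetypes (k x d)
        R = A @ Z - X            # reconstruction residual
        A = project_simplex(A - lr * R @ Z.T)        # gradient step in A
        B = project_simplex(B - lr * A.T @ R @ X.T)  # gradient step in B
    return A, B @ X              # per-sample weights, archetypes
```

Because each archetype is a convex combination of observed VFs, a clinician can read a prototype as an extreme but realizable field, unlike PCA components, which mix signs and need not resemble any actual VF.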
Ophthalmic images and their derivatives, such as the retinal nerve fiber layer (RNFL) thickness map, are crucial for detecting and monitoring ophthalmic diseases (e.g., glaucoma). For computer-aided diagnosis of eye diseases, the key technique is to automatically extract meaningful features from ophthalmic images that reveal the biomarkers (e.g., RNFL thinning patterns) linked to functional vision loss. However, representation learning from ophthalmic images that links structural retinal damage with human vision loss is non-trivial, mostly owing to large anatomical variations between patients. The task becomes even more challenging in the presence of image artifacts, which are common due to issues with image acquisition and automated segmentation. In this paper, we propose an artifact-tolerant unsupervised learning framework, termed EyeLearn, for learning representations of ophthalmic images. EyeLearn has an artifact correction module that learns representations which best predict artifact-free ophthalmic images. In addition, EyeLearn adopts a clustering-guided contrastive learning strategy to explicitly capture intra- and inter-image affinities. During training, images are dynamically organized into clusters to form contrastive samples, in which images in the same or different clusters are encouraged to learn similar or dissimilar representations, respectively. To evaluate EyeLearn, we use the learned representations for visual field prediction and glaucoma detection on a real-world ophthalmic image dataset of glaucoma patients. Extensive experiments and comparisons with state-of-the-art methods verify the effectiveness of EyeLearn for learning optimal feature representations from ophthalmic images.
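To make the clustering-guided contrastive strategy concrete, the following is a minimal sketch of one plausible instantiation: a supervised-contrastive-style loss whose positive pairs are defined by dynamically computed k-means cluster labels. This is an assumption about the general recipe, not EyeLearn's actual objective or API; `n_clusters`, `temperature`, and the per-batch re-clustering are all illustrative choices.

```python
# Sketch of a clustering-guided contrastive loss: embeddings in the same
# k-means cluster are pulled together, others pushed apart.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def cluster_contrastive_loss(embeddings, n_clusters=10, temperature=0.1):
    z = F.normalize(embeddings, dim=1)           # unit-norm embeddings
    # Re-cluster the current embeddings; in practice clustering would be
    # refreshed periodically over the whole training set, not per batch.
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        z.detach().cpu().numpy())
    labels = torch.as_tensor(labels, device=z.device)
    sim = z @ z.T / temperature                  # pairwise similarities
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = (labels[:, None] == labels[None, :]) & ~eye  # same-cluster pairs
    logits = sim.masked_fill(eye, float('-inf'))       # drop self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    n_pos = pos.sum(1).clamp(min=1)              # guard singleton clusters
    return -(log_prob * pos).sum(1).div(n_pos).mean()
```

In this formulation the cluster labels play the role of pseudo-labels: as the representations improve, re-clustering yields cleaner positive sets, which is what "dynamically organized in clusters" suggests.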