Wayne Rasband of the NIH created ImageJ, an open-source image-processing program written in Java, now at version 1.31, that is used in imaging applications spanning the gamut from skin analysis to neuroscience. ImageJ is in the public domain and runs on any operating system (OS). It is easy to use, supports a wide range of image manipulations, and is backed by a large and knowledgeable user community. Topics covered are imaging abilities; cross-platform support; image format support as of June 2004; extensions, including macros and plug-ins; and the imaging library. The NIH reports tens of thousands of downloads, currently at a rate of about 24,000 per month. ImageJ can read most of the widely used formats for biomedical images. Supported manipulations include reading and writing image files and operations on individual pixels, image regions, whole images, and volumes (stacks, in ImageJ terms). Basic operations include convolution, edge detection, Fourier transforms, histogram and particle analyses, editing and color manipulation, and more advanced operations, as well as visualization. For assistance, users can post to the mailing list, where the highly knowledgeable user base answers requests. A thorough manual with many examples and illustrations, written by Tony Collins of the Wright Cell Imaging Facility at Toronto Western Research Institute, is available on the Web along with other listed resources.
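As a point of comparison, the basic pixel-level operations the abstract attributes to ImageJ (convolution, edge detection, histograms) can be sketched outside ImageJ with NumPy; this is an illustrative stand-in, not ImageJ's own implementation, and the menu paths in the comments are the approximate ImageJ equivalents.

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'same'-size 2-D convolution with zero padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * flipped)
    return out

image = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(float)

# 3x3 mean-filter convolution (cf. ImageJ's Process > Filters > Convolve...)
blurred = conv2d(image, np.full((3, 3), 1 / 9.0))

# Sobel edge magnitude (cf. ImageJ's Process > Find Edges)
gx = conv2d(image, np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], float))
gy = conv2d(image, np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], float))
edges = np.hypot(gx, gy)

# Intensity histogram (cf. ImageJ's Analyze > Histogram)
hist, _ = np.histogram(image, bins=256, range=(0, 256))
```

In ImageJ itself, the same steps would more naturally be scripted as a macro or implemented as a Java plug-in against its imaging library.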
Purpose: To increase the efficiency of retinal image grading, algorithms for automated grading have been developed, such as the IDx-DR 2.0 device. We aimed to determine the ability of this device, incorporated into the clinical workflow, to detect retinopathy in persons with type 2 diabetes.
Methods: Retinal images of persons treated by the Hoorn Diabetes Care System (DCS) were graded by the IDx-DR device and independently by three retinal specialists using the International Clinical Diabetic Retinopathy severity scale (ICDR) and EURODIAB criteria. Agreement between specialists was calculated. Results of the IDx-DR device and the experts were compared using sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV), distinguishing between referable diabetic retinopathy (RDR) and vision-threatening retinopathy (VTDR). The area under the receiver operating characteristic curve (AUC) was calculated.
Results: Of the 1415 included persons, 898 (63.5%) had images of sufficient quality according to both the experts and the IDx-DR device. Referable diabetic retinopathy (RDR) was diagnosed in 22 persons (2.4%) using EURODIAB and 73 persons (8.1%) using ICDR classification. Specific intergrader agreement ranged from 40% to 61%. Sensitivity, specificity, PPV and NPV of IDx-DR to detect RDR were 91% (95% CI: 0.69–0.98), 84% (95% CI: 0.81–0.86), 12% (95% CI: 0.08–0.18) and 100% (95% CI: 0.99–1.00; EURODIAB), and 68% (95% CI: 0.56–0.79), 86% (95% CI: 0.84–0.88), 30% (95% CI: 0.24–0.38) and 97% (95% CI: 0.95–0.98; ICDR). The AUC was 0.94 (95% CI: 0.88–1.00; EURODIAB) and 0.87 (95% CI: 0.83–0.92; ICDR). For detection of VTDR, sensitivity was lower and specificity was higher compared to RDR; AUCs were comparable.
Conclusion: Automated grading using the IDx-DR device for RDR detection is a valid method and can be used in primary care, decreasing the demand on ophthalmologists.
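The four screening metrics reported above follow directly from a 2x2 confusion matrix. The sketch below shows the standard definitions; the counts used in the example are illustrative, roughly back-calculated from the reported EURODIAB figures (22 RDR cases among 898 gradable persons), and are not the study's actual data.

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true-positive rate among diseased
    specificity = tn / (tn + fp)   # true-negative rate among healthy
    ppv = tp / (tp + fp)           # chance a positive referral is truly RDR
    npv = tn / (tn + fn)           # chance a negative result is truly healthy
    return sensitivity, specificity, ppv, npv

# Illustrative counts only (low prevalence, as in RDR screening):
sens, spec, ppv, npv = screening_metrics(tp=20, fp=140, fn=2, tn=736)
```

Note how low prevalence drives the PPV down (~12%) even with high sensitivity and specificity, while the NPV stays near 100%, which is why such devices are attractive for ruling out disease in primary care.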
In this paper, we propose the use of multiscale amplitude-modulation-frequency-modulation (AM-FM) methods for discriminating between normal and pathological retinal images. The method presented in this paper is tested using standard images from the Early Treatment Diabetic Retinopathy Study. We use 120 regions of 40 × 40 pixels containing four types of lesions commonly associated with diabetic retinopathy (DR) and two types of normal retinal regions that were manually selected by a trained analyst. The region types included microaneurysms, exudates, neovascularization on the retina, hemorrhages, normal retinal background, and normal vessel patterns. The cumulative distribution functions of the instantaneous amplitude, the instantaneous frequency magnitude, and the relative instantaneous frequency angle from multiple scales are used as texture feature vectors. We use distance metrics between the extracted feature vectors to measure interstructure similarity. Our results demonstrate a statistical differentiation of normal retinal structures and pathological lesions based on AM-FM features. We further demonstrate our AM-FM methodology by applying it to classification of retinal images from the MESSIDOR database. Overall, the proposed methodology shows significant capability for use in automatic DR screening.
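A minimal sketch of the feature-vector construction described above: sample the empirical CDF of a per-pixel measurement at fixed points and compare regions with an L1 distance. The AM-FM demodulation itself (instantaneous amplitude/frequency estimation across scales) is not reproduced here; the random arrays stand in for those per-pixel estimates.

```python
import numpy as np

def cdf_feature(values, grid):
    """Empirical CDF of per-pixel values, sampled at fixed grid points."""
    values = np.sort(np.ravel(values))
    return np.searchsorted(values, grid, side="right") / values.size

def region_distance(region_a, region_b, grid):
    """L1 distance between the CDF feature vectors of two regions."""
    fa, fb = cdf_feature(region_a, grid), cdf_feature(region_b, grid)
    return float(np.abs(fa - fb).sum())

grid = np.linspace(0.0, 1.0, 32)      # CDF sampling points
rng = np.random.default_rng(1)
a = rng.uniform(size=(40, 40))        # stand-in for instantaneous amplitude
b = rng.uniform(size=(40, 40)) ** 2   # a differently distributed region
```

Identically distributed regions yield near-zero distance, while distributionally distinct regions (here, `b` is skewed toward zero) separate cleanly, which is the basis for the interstructure similarity measurement.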
Evaluation of optic nerve head (ONH) structure is a commonly used clinical technique for both diagnosis and monitoring of glaucoma. Glaucoma is associated with characteristic changes in the structure of the ONH. We present a method for computationally identifying ONH structural features using both imaging and genetic data from a large cohort of participants at risk for primary open angle glaucoma (POAG). Using 1054 participants from the Ocular Hypertension Treatment Study, ONH structure was measured by application of a stereo correspondence algorithm to stereo fundus images. In addition, the genotypes of several known POAG genetic risk factors were considered for each participant. ONH structural features were discovered using both a principal component analysis approach to identify the major modes of variance within structural measurements and a linear discriminant analysis approach to capture the relationship between genetic risk factors and ONH structure. The identified ONH structural features were evaluated based on the strength of their associations with genotype and development of POAG by the end of the OHTS study. ONH structural features with strong associations with genotype were identified for each of the genetic loci considered. Several identified ONH structural features were significantly associated (p < 0.05) with the development of POAG after Bonferroni correction. Further, incorporation of genetic risk status was found to substantially increase performance of early POAG prediction. These results suggest incorporating both imaging and genetic data into ONH structural modeling significantly improves the ability to explain POAG-related changes to ONH structure.
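The "major modes of variance" step described above is a standard PCA decomposition of a participants-by-measurements matrix. The following is a generic sketch of that step under the assumption that each row holds one participant's ONH structural measurements; the random data is a placeholder, and the study's stereo-correspondence and LDA stages are not reproduced.

```python
import numpy as np

def principal_modes(measurements, k):
    """Top-k modes of variance via SVD of the mean-centred data matrix
    (the usual PCA formulation). Rows are participants, columns are
    structural measurement points."""
    centred = measurements - measurements.mean(axis=0)
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)   # variance fraction per mode
    return vt[:k], explained[:k]

rng = np.random.default_rng(2)
# Placeholder for per-participant ONH measurements (participants x points)
data = rng.normal(size=(100, 50))
modes, explained = principal_modes(data, k=3)
```

Each returned mode is a direction in measurement space; projecting a participant's measurements onto it gives one scalar structural feature, which can then be tested for association with genotype or POAG onset.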
BACKGROUND AND OBJECTIVE: To use automated segmentation software to analyze spectral-domain optical coherence tomography (SD-OCT) scans and evaluate the effectiveness of aflibercept (Eylea; Regeneron, Tarrytown, NY) in the treatment of patients with exudative age-related macular degeneration (AMD) refractory to other treatments. PATIENTS AND METHODS: A retrospective chart review of 16 patients refractory to bevacizumab (Avastin; Genentech, South San Francisco, CA)/ranibizumab (Lucentis; Genentech, San Francisco, CA) treatment was conducted. Visual acuity, central foveal thickness (CFT), maximum fluid height, pigment epithelial detachment (PED) volume, subretinal fluid (SRF) volume, fluid-free time interval, and adverse effects were evaluated. Automated segmentation analysis was used to quantify improvement. RESULTS: With aflibercept treatment, there was a statistically significant improvement in visual acuity by 1 line (P = .020), in CFT by 74.02 µm (P = .001), and in maximum fluid height by 31.9 µm (P = .011). Total PED and SRF volume also decreased significantly, by 1.50 × 10⁸ µm³ (P = .013). Anatomic improvement was confirmed by automated segmentation analysis. CONCLUSION: This study demonstrates the utility of automated segmentation software in quantifying anatomic improvement with aflibercept treatment in exudative AMD refractory to other anti-vascular endothelial growth factor treatments. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:245–251.]
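P values for pre/post measurements like the CFT change above typically come from a paired test on the per-eye differences. The sketch below computes the paired t statistic; the µm readings are hypothetical, not the study's data, and a P value would be obtained from the t distribution with n − 1 degrees of freedom (e.g. via scipy.stats).

```python
import numpy as np

def paired_t_statistic(before, after):
    """t statistic of the paired differences (after - before),
    plus the degrees of freedom n - 1."""
    d = np.asarray(after, float) - np.asarray(before, float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1

# Hypothetical CFT readings (µm) for 8 eyes, pre- and post-treatment:
before = [402, 388, 415, 390, 410, 398, 405, 392]
after  = [330, 320, 345, 318, 340, 325, 332, 321]
t, dof = paired_t_statistic(before, after)   # t < 0: thickness decreased
```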
Much of the research community's effort is directed toward creating an automatic screening system able to promptly detect diabetic retinopathy from fundus camera images. In addition, there are documented approaches for automatically judging image quality. We propose a new set of features, independent of field of view and resolution, that describe the morphology of the patient's vessels. Our initial results suggest that these features can be used to estimate image quality in a time one order of magnitude shorter than with previous techniques.
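The abstract does not specify the vessel-morphology features, but one way to make a feature independent of resolution is to use dimensionless quantities. As a purely hypothetical illustration, vessel density (the fraction of field-of-view pixels classified as vessel) is unchanged, in the ideal case, when the image is rescaled:

```python
import numpy as np

def vessel_density(mask):
    """Fraction of pixels flagged as vessel -- dimensionless, so it is
    (ideally) invariant to image resolution and field of view."""
    return np.asarray(mask, bool).mean()

mask = np.zeros((100, 100), bool)
mask[:, 48:52] = True                    # a crude 4-pixel-wide 'vessel'
d_full = vessel_density(mask)            # density at full resolution
d_half = vessel_density(mask[::2, ::2])  # density after 2x downsampling
```

Both calls return the same density (0.04 here), illustrating the invariance property such features rely on; the actual features of the paper would be derived from a full vessel segmentation.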