Fundus retinal imaging is an easy-to-acquire modality typically used for monitoring eye health. Current evidence indicates that the retina, and its vasculature in particular, is associated with other disease processes, making it an ideal candidate for biomarker discovery. The development of these biomarkers has typically relied on predefined measurements, which makes the development process slow. Recently, representation learning algorithms such as general-purpose convolutional neural networks or vasculature embeddings have been proposed as an approach to learn imaging biomarkers directly from the data, greatly speeding up their discovery. In this work, we compare and contrast different state-of-the-art retina biomarker discovery methods to identify signs of past stroke in the retinas of a curated patient cohort of 2,472 subjects from the UK Biobank dataset. We investigate two convolutional neural networks previously used in retina biomarker discovery and directly trained on the stroke outcome, and an extension of the vasculature embedding approach, which infers its feature representation from the vasculature and combines the information of retinal images from both eyes. In our experiments, we show that the pipeline based on vasculature embeddings achieves comparable or better performance than the other methods with a much more compact feature representation and ease of training. Clinical Relevance: This study compares and contrasts three retinal biomarker discovery strategies, using a curated dataset of subject evidence, for the analysis of the retina as a proxy in the assessment of clinical outcomes, such as stroke risk.
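As a rough illustration of the embedding-based pipeline, the sketch below fuses per-eye vasculature embeddings by concatenation and trains a simple classifier on the stroke outcome. All array names, the embedding dimension, and the concatenation fusion rule are assumptions for illustration, not details from the paper.

```python
# Minimal sketch: combine per-eye vasculature embeddings and train a stroke
# classifier. The embeddings below are random stand-ins; in the paper they
# would come from a vasculature representation model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, emb_dim = 2472, 64                       # cohort size from the abstract; dim assumed
left_emb = rng.normal(size=(n_subjects, emb_dim))    # stand-in left-eye embeddings
right_emb = rng.normal(size=(n_subjects, emb_dim))   # stand-in right-eye embeddings
y = rng.integers(0, 2, size=n_subjects)              # stand-in stroke labels

# One simple way to combine both eyes: concatenate the two embeddings.
X = np.concatenate([left_emb, right_emb], axis=1)

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean cross-validated AUC: {auc.mean():.2f}")
```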
Vessel segmentation in fundus images permits understanding retinal diseases and computing image-based biomarkers. However, manual vessel segmentation is a time-consuming process. Optical coherence tomography angiography (OCT-A) allows direct, non-invasive estimation of retinal vessels. Unfortunately, compared to fundus cameras, OCT-A cameras are more expensive, less portable, and have a reduced field of view. We present an automated strategy relying on generative adversarial networks to create vascular maps from fundus images, trained without manual vessel segmentation maps. The post-processing routinely used for standard en-face OCT-A then yields a vessel segmentation map. We compare our approach to state-of-the-art vessel segmentation algorithms trained on manual vessel segmentation maps and vessel segmentations derived from OCT-A. We evaluate them both as automatic vessel segmentation methods and as estimators of vessel density, the most common OCT-A imaging biomarker used in studies. Using OCT-A as a training target instead of manual vessel delineations yields improved vascular maps for the optic disc area and performs comparably to the best vessel segmentation algorithm in the macular region. This technique could reduce the cost and effort incurred when training vessel segmentation algorithms. To incentivize research in this field, we will make the dataset publicly available to the scientific community.
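Vessel density, the evaluation biomarker mentioned above, is simple enough to sketch. Below is a minimal illustration in which the thresholding step stands in for the standard en-face OCT-A post-processing; the array shapes and threshold value are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of vessel density: the fraction of vessel pixels inside a
# region of interest, computed from a (possibly synthetic) vascular map.
import numpy as np

def vessel_density(prob_map: np.ndarray, roi_mask: np.ndarray,
                   threshold: float = 0.5) -> float:
    """Fraction of ROI pixels classified as vessel."""
    vessels = prob_map >= threshold          # binarize the vascular map
    return float(vessels[roi_mask].mean())   # density inside the ROI

# Toy usage with a random "vascular map" and a full-frame ROI.
rng = np.random.default_rng(0)
prob_map = rng.random((304, 304))            # e.g. generator output in [0, 1]
roi = np.ones_like(prob_map, dtype=bool)
print(f"vessel density: {vessel_density(prob_map, roi):.3f}")
```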
Multiple sclerosis (MS) is a demyelinating disease that affects the central nervous system (CNS) and is characterized by the presence of CNS lesions. Volumetric measures of tissues, including lesions, on magnetic resonance imaging (MRI) play key roles in the clinical management and treatment evaluation of MS patients. Recent advances in deep learning (DL) show promising results for automated medical image segmentation. In this work, we used deep convolutional neural networks (CNNs) for brain tissue classification on MRI acquired from MS patients in a large multi-center clinical trial. Multi-channel MRI data that included T1-weighted, dual-echo fast spin echo, and fluid-attenuated inversion recovery images were acquired on these patients. The pre-processed images (following co-registration, skull stripping, bias field correction, intensity normalization, and de-noising) served as the input to the CNN for tissue classification. The network was trained using expert-validated segmentations. Quantitative assessment showed high Dice similarity coefficients (DSC) between the CNN output and the validated segmentations, with DSC values of 0.94 for white matter and grey matter, 0.97 for cerebrospinal fluid, and 0.85 for T2 hyperintense lesions. These results suggest that deep neural networks can successfully segment brain tissues, which is crucial for reliable assessment of tissue volumes in MS.
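For reference, the Dice similarity coefficient reported above can be computed as in the following minimal sketch (the masks here are toy examples, not the study's data):

```python
# Minimal sketch of the Dice similarity coefficient (DSC) used to compare
# the CNN output against expert-validated segmentations.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))

# Toy example: two overlapping square masks.
a = np.zeros((64, 64), bool); a[10:40, 10:40] = True
b = np.zeros((64, 64), bool); b[15:45, 15:45] = True
print(f"DSC: {dice(a, b):.2f}")
```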
Background: Intracerebral hemorrhage (ICH) is associated with a mortality of up to 40% in the first 30 days. Early identification of predictors of hematoma expansion (HE) may improve efforts to prevent its occurrence and improve clinical outcome. Methods: We identified patients with ICH and follow-up imaging. HE was defined in our dataset as an absolute volume increase of 6 cc, new intraventricular hemorrhage (IVH), or a proportional increase of 33% on the 72 h follow-up scan. We evaluated the predictive ability of 3 machine learning classifiers: Random Forest, Support Vector Machine (with an RBF kernel), and Logistic Regression (with L1 regularization). The evaluation used stratified K-fold cross-validation to avoid overfitting, with K set to the number of subjects with HE. The features employed by the classifiers were entirely based on the baseline imaging: hematoma volume, systolic BP, diastolic BP, and the presence of black hole, island, blend, fluid level, swirl, and spot signs. Results: Our dataset comprised 91 patients (n=21 HE, n=70 no HE). According to the area under the ROC curve (AUC), the two top-performing classifiers were the Support Vector Machine (AUC=0.66, CI 0.50-0.79) and Logistic Regression (AUC=0.64, CI 0.49-0.80). The statistical significance of the predictions was confirmed by the Mann-Whitney U test (p=0.01 and p=0.04, respectively). Random Forest did not reach statistical significance. Finally, we examined the highest- and lowest-weighted features across the cross-validation folds with Logistic Regression. The 3 top features were the presence of black hole signs, the presence of island signs, and the systolic blood pressure. The 3 least useful features were the presence of spot signs, the presence of swirl signs, and the hematoma volume. Conclusion: Using our cohort, we developed a machine learning algorithm that predicts hematoma expansion from baseline imaging features and blood pressure. Machine learning provided better sensitivity for these imaging markers compared with previous studies.
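The evaluation protocol described above can be sketched as follows; the feature matrix is random stand-in data, and the study's exact preprocessing and classifier settings are not reproduced.

```python
# Minimal sketch of the protocol: stratified K-fold cross-validation with K
# equal to the number of HE subjects, scored with the ROC AUC and a
# Mann-Whitney U test on the pooled out-of-fold scores.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(91, 9))                        # 91 patients, 9 baseline features
y = np.r_[np.ones(21), np.zeros(70)].astype(int)    # 21 HE, 70 no HE

cv = StratifiedKFold(n_splits=int(y.sum()))         # K = number of HE subjects
clf = LogisticRegression(penalty="l1", solver="liblinear")
scores = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

print(f"AUC: {roc_auc_score(y, scores):.2f}")
u, p = mannwhitneyu(scores[y == 1], scores[y == 0])
print(f"Mann-Whitney U p-value: {p:.3f}")
```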
The foveal avascular zone (FAZ) is a retinal area devoid of capillaries and associated with multiple retinal pathologies and visual acuity. Optical Coherence Tomography Angiography (OCT-A) is a very effective means of visualizing retinal vascular and avascular areas, but its use remains limited to research settings because its complex optics limit availability. On the other hand, fundus photography is widely available and often adopted in population studies. In this work, we test the feasibility of estimating the FAZ from fundus photos using three different approaches. The first two approaches rely on pixel-level and image-level FAZ information to segment FAZ pixels and regress FAZ area, respectively. The third is a pipeline requiring no training masks, which combines saliency maps with an active-contour approach to segment FAZ pixels while being trained only on image-level measures of FAZ area. This enables training FAZ segmentation methods without the manual alignment of fundus and OCT-A images, a time-consuming process that limits the datasets available for training. Segmentation methods trained on pixel-level labels and image-level labels had good agreement with masks from a human grader (Dice of 0.45 and 0.40, respectively). Results indicate the feasibility of using fundus images as a proxy to estimate the FAZ when angiography data are not available.
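A minimal sketch of the mask-free idea follows, using a synthetic saliency map and scikit-image's morphological Chan-Vese as a stand-in active-contour step; the paper's actual saliency and contour methods may differ.

```python
# Minimal sketch: refine a saliency map into a FAZ mask with an
# active-contour step. The saliency map here is synthetic; in the paper it
# would come from a network trained on image-level FAZ area measures.
import numpy as np
from skimage.segmentation import morphological_chan_vese

# Synthetic "saliency map": a bright central blob standing in for the FAZ.
yy, xx = np.mgrid[0:128, 0:128]
saliency = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 15.0 ** 2))

# Initialize the level set from the high-saliency region, then evolve it.
init = saliency > 0.5
faz_mask = morphological_chan_vese(saliency, 50, init_level_set=init)

area_px = int(faz_mask.sum())
print(f"estimated FAZ area: {area_px} pixels")
```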
Introduction

Vessel segmentation in fundus images is essential in the diagnosis and prognosis of retinal diseases and the identification of image-based biomarkers. However, creating a vessel segmentation map can be a tedious and time-consuming process, requiring careful delineation of the vasculature, which is especially hard for microcapillary plexi in fundus images. Optical coherence tomography angiography (OCT-A) is a relatively novel modality visualizing blood flow and microcapillary plexi not clearly observed in fundus photography. Unfortunately, current commercial OCT-A cameras have various limitations due to their complex optics, making them more expensive, less portable, and with a reduced field of view (FOV) compared to fundus cameras. Moreover, the vast majority of population health data collection efforts do not include OCT-A data. We believe that strategies able to map fundus images to en-face OCT-A can create precise vessel segmentations with less effort.

In this dataset, called the UTHealth - Fundus and Synthetic OCT-A Dataset (UT-FSOCTA), we include fundus images and en-face OCT-A images for 112 subjects. The two modalities have been manually aligned to allow for the training of medical imaging machine learning pipelines. This dataset is accompanied by a manuscript that describes an approach to generate fundus vessel segmentations using OCT-A for training (Coronado et al., 2023). We refer to this approach as "Synthetic OCT-A".

Fundus Imaging

We include 45-degree macula-centered fundus images that cover both the macula and the optic disc. All images were acquired using an OptoVue iVue fundus camera without pupil dilation. The full images are available in the fov45/fundus directory. In addition, we extracted the FOVs corresponding to the en-face OCT-A images, collected in cropped/fundus/disc and cropped/fundus/macula.

En-face OCT-A

We include the en-face OCT-A images of the superficial capillary plexus. All images were acquired using an OptoVue Avanti OCT camera with OCT-A reconstruction software (AngioVue). Low-quality images with errors in the retina layer segmentations were not included. En-face OCT-A images are located in cropped/octa/disc and cropped/octa/macula. In addition, we include a denoised version of these images where only vessels are included. This was performed automatically using the ROSE algorithm (Ma et al., 2021). These can be found in cropped/GT_OCT_net/noThresh and cropped/GT_OCT_net/Thresh; the former contains the probability maps produced by the ROSE algorithm, the latter binary maps.

Synthetic OCT-A

We train a custom conditional generative adversarial network (cGAN) to map a fundus image to an en-face OCT-A image. Our model consists of a generator synthesizing en-face OCT-A images from corresponding areas in fundus photographs and a discriminator judging the resemblance of the synthesized images to the real en-face OCT-A samples. This allows us to avoid the use of manual vessel segmentation maps altogether. The full images are available in the fov45/synthetic_octa directory. Then, we extracted the FOVs corresponding to the en-face OCT-A images, collected in cropped/synthetic_octa/disc and cropped/synthetic_octa/macula. In addition, we applied the same ROSE denoising algorithm (Ma et al., 2021) used for the original en-face OCT-A images; the results are available in cropped/denoised_synthetic_octa/noThresh and cropped/denoised_synthetic_octa/Thresh, the former containing the probability maps of the ROSE algorithm, the latter binary maps.
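As a convenience, a traversal of the directory layout above might look like the sketch below; the root path and the pairing-by-filename assumption are mine, not part of the dataset specification.

```python
# Minimal sketch: pair fundus crops with their en-face OCT-A counterparts
# following the cropped/ directory layout described above.
from pathlib import Path

root = Path("UT-FSOCTA")  # hypothetical local copy of the dataset

def paired_files(region: str):
    """Yield (fundus, octa) paths for 'disc' or 'macula' crops."""
    fundus_dir = root / "cropped" / "fundus" / region
    octa_dir = root / "cropped" / "octa" / region
    for fundus_path in sorted(fundus_dir.glob("*")):
        octa_path = octa_dir / fundus_path.name  # assumes matching filenames
        if octa_path.exists():
            yield fundus_path, octa_path

for fundus, octa in paired_files("macula"):
    print(fundus.name, "<->", octa.name)
```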
Other Fundus Vessel Segmentations Included

In this dataset, we have also included the output of two recent vessel segmentation algorithms trained on external datasets with manual vessel segmentations: SA-UNet (Guo et al., 2021) and IterNet (Li et al., 2020).

SA-UNet. The full images are available in the fov45/SA_Unet directory. Then, we extracted the FOVs corresponding to the en-face OCT-A images, collected in cropped/SA_Unet/disc and cropped/SA_Unet/macula.

IterNet. The full images are available in the fov45/Iternet directory. Then, we extracted the FOVs corresponding to the en-face OCT-A images, collected in cropped/Iternet/disc and cropped/Iternet/macula.

Train/Validation/Test Replication

In order to replicate or compare your model to the results of our paper, we report below the data split used (a small helper implementing this split is sketched after the references below).

Training subject IDs: 1 - 25
Validation subject IDs: 26 - 30
Testing subject IDs: 31 - 112

Data Acquisition

This dataset was acquired at the Texas Medical Center - Memorial Hermann Hospital in accordance with the guidelines of the Helsinki Declaration, and it was approved by the UTHealth IRB under protocol HSC-MS-19-0352.

User Agreement

The UT-FSOCTA dataset is free to use for non-commercial scientific research only. In case of any publication, the following paper needs to be cited: Coronado I, Pachade S, Trucco E, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino-Cole A, Bahrainian M, Channa R, Sheth SA, Giancardo L. Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks. Sci Rep 2023;13:15325. https://doi.org/10.1038/s41598-023-42062-9.

Funding

This work is supported by the Translational Research Institute for Space Health through NASA Cooperative Agreement NNX16AO69A.

Research Team and Acknowledgements

Here are the people behind this data acquisition effort: Ivan Coronado, Samiksha Pachade, Rania Abdelkhaleq, Juntao Yan, Sergio Salazar-Marioni, Amanda Jagolino, Mozhdeh Bahrainian, Roomasa Channa, Sunil Sheth, Luca Giancardo. We would also like to acknowledge, for their support: the Institute for Stroke and Cerebrovascular Diseases at UTHealth, the VAMPIRE team at the University of Dundee, UK, and the Memorial Hermann Hospital System.

References

Coronado I, Pachade S, Trucco E, Abdelkhaleq R, Yan J, Salazar-Marioni S, Jagolino-Cole A, Bahrainian M, Channa R, Sheth SA, Giancardo L. Synthetic OCT-A blood vessel maps using fundus images and generative adversarial networks. Sci Rep 2023;13:15325. https://doi.org/10.1038/s41598-023-42062-9.

C. Guo, M. Szemenyei, Y. Yi, W. Wang, B. Chen, and C. Fan, "SA-UNet: Spatial Attention U-Net for Retinal Vessel Segmentation," in 2020 25th International Conference on Pattern Recognition (ICPR), Jan. 2021, pp. 1236–1242. doi: 10.1109/ICPR48806.2021.9413346.

L. Li, M. Verma, Y. Nakashima, H. Nagahara, and R. Kawasaki, "IterNet: Retinal Image Segmentation Utilizing Structural Redundancy in Vessel Networks," in 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), 2020. doi: 10.1109/WACV45572.2020.9093621.

Y. Ma et al., "ROSE: A Retinal OCT-Angiography Vessel Segmentation Dataset and New Model," IEEE Trans. Med. Imaging, vol. 40, no. 3, pp. 928–939, Mar. 2021, doi: 10.1109/TMI.2020.3042802.
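Here is the small split helper referenced in the Train/Validation/Test Replication section above; it simply maps a subject ID to its published partition, as a sketch.

```python
# Minimal sketch of the published split (training 1-25, validation 26-30,
# testing 31-112). How subject IDs are extracted from filenames is left to
# the user; any parsing convention would be an assumption.
def split_of(subject_id: int) -> str:
    if 1 <= subject_id <= 25:
        return "train"
    if 26 <= subject_id <= 30:
        return "val"
    if 31 <= subject_id <= 112:
        return "test"
    raise ValueError(f"unknown subject id: {subject_id}")

assert split_of(7) == "train" and split_of(28) == "val" and split_of(100) == "test"
```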
Objective: To investigate the performance of deep learning (DL) based on a fully convolutional neural network (FCNN) in segmenting brain tissues in a large cohort of multiple sclerosis (MS) patients. Methods: We developed an FCNN model to segment brain tissues, including T2-hyperintense MS lesions. The training, validation, and testing of the FCNN were based on ~1000 magnetic resonance imaging (MRI) datasets acquired on relapsing–remitting MS patients as part of a phase 3 randomized clinical trial. Multimodal MRI data (dual-echo, FLAIR, and T1-weighted images) served as input to the network. Expert-validated segmentation was used as the target for training the FCNN. We cross-validated our results using the leave-one-center-out approach. Results: We observed a high average (95% confidence limits) Dice similarity coefficient for all the segmented tissues: 0.95 (0.92–0.98) for white matter, 0.96 (0.93–0.98) for gray matter, 0.99 (0.98–0.99) for cerebrospinal fluid, and 0.82 (0.63–1.0) for T2 lesions. High correlations between the DL-segmented tissue volumes and ground truth were observed (R² > 0.92 for all tissues). The cross-validation showed consistent results across the centers for all tissues. Conclusion: The results from this large-scale study suggest that a deep FCNN can automatically segment MS brain tissues, including lesions, with high accuracy.