Optical diffraction tomography measures the three-dimensional refractive index map of a specimen and visualizes biochemical phenomena at the nanoscale in a non-destructive manner. One major drawback of optical diffraction tomography is poor axial resolution due to limited access to the three-dimensional optical transfer function. This missing cone problem has been addressed through regularization algorithms that use a priori information, such as non-negativity and sample smoothness. However, the iterative nature of these algorithms and their parameter dependency make real-time visualization impossible. In this article, we propose and experimentally demonstrate a deep neural network, which we term DeepRegularizer, that rapidly improves the resolution of a three-dimensional refractive index map. Trained with pairs of datasets (a raw refractive index tomogram and a resolution-enhanced refractive index tomogram via the iterative total variation algorithm), the three-dimensional U-net-based convolutional neural network learns a transformation between the two tomogram domains. The feasibility and generalizability of our network are demonstrated using bacterial cells and a human leukaemic cell line, and by validating the model across different samples. DeepRegularizer offers more than an order of magnitude faster regularization performance compared to the conventional iterative method. We envision that the proposed data-driven approach can bypass the high time complexity of various image reconstructions in other imaging modalities.
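The iterative total variation regularizer that DeepRegularizer is trained to imitate can be sketched as a smoothed-TV gradient descent with a non-negativity clamp, which conveys why per-sample iteration is slow compared to a single network pass. The sketch below is our illustration under those assumptions (all parameter names and values are arbitrary), not the authors' implementation:

```python
import numpy as np

def tv_regularize_3d(tomo, lam=0.1, step=0.1, eps=1e-8, n_iter=50):
    """Illustrative smoothed-TV gradient descent on a 3D refractive-index
    tomogram with an a priori non-negativity constraint. A minimal
    stand-in for the iterative regularizer described in the abstract;
    parameter values here are illustrative, not from the paper."""
    u = tomo.copy()
    for _ in range(n_iter):
        grad = u - tomo  # data-fidelity term: stay close to the raw tomogram
        for ax in range(3):
            edge = np.take(u, [-1], axis=ax)
            d = np.diff(u, axis=ax, append=edge)   # forward differences, Neumann boundary
            w = d / np.sqrt(d * d + eps)           # derivative of the smoothed |.|
            grad -= lam * np.diff(w, axis=ax, prepend=0.0)
        u -= step * grad
        np.maximum(u, 0.0, out=u)                  # a priori non-negativity
    return u
```

Each iteration touches every voxel several times, so dozens of iterations per tomogram dominate the runtime that the learned regularizer amortizes into one forward pass.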
The aim of this study was to develop a deep learning model for classifying frames with versus without optical coherence tomography (OCT)-derived thin-cap fibroatheroma (TCFA). A total of 602 coronary lesions from 602 angina patients were randomised into training and test sets in a 4:1 ratio. A DenseNet model was developed to classify OCT frames with or without OCT-derived TCFA. Gradient-weighted class activation mapping was used to visualise the area of attention. In the training sample (35,678 frames of 480 lesions), the model with fivefold cross-validation had an overall accuracy of 91.6±1.7%, sensitivity of 88.7±3.4%, and specificity of 91.8±2.0% (averaged AUC=0.96±0.01) in predicting the presence of TCFA. In the test samples (9,722 frames of 122 lesions), the overall accuracy at the frame level was 92.8% within the lesion (AUC=0.96) and 91.3% in the entire OCT pullback. The correlation between the %TCFA burden per vessel predicted by the model and that identified by experts was significant (r=0.87, p<0.001). The region of attention was localised at the site of the thin cap in 93.4% of TCFA-containing frames. Total computational time per pullback was 2.1±0.3 seconds. A deep learning algorithm can accurately detect OCT-derived TCFA with high reproducibility. This time-saving computerised process may assist clinicians in recognising high-risk lesions and making decisions in the catheterisation laboratory.
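The frame-level metrics reported above (accuracy, sensitivity, specificity) all derive from a binary confusion matrix over predicted versus expert-labelled TCFA frames. A small illustrative helper (our own, not the study's code):

```python
import numpy as np

def frame_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary frame labels
    (1 = TCFA present, 0 = absent). Illustrative helper; the function
    and variable names are ours, not from the paper."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # recall on TCFA frames
        "specificity": tn / (tn + fp),  # recall on non-TCFA frames
    }
```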
Motivated by the observation that content transformations tend to preserve the semantic information conveyed by video clips, this paper introduces a novel technique for near-duplicate video clip (NDVC) detection, leveraging model-free semantic concept detection and adaptive semantic distance measurement. In particular, model-free semantic concept detection is realized by taking advantage of the collective knowledge in an image folksonomy (which is an unstructured collection of user-contributed images and tags), facilitating the use of an unrestricted concept vocabulary. Adaptive semantic distance measurement is realized by means of the signature quadratic form distance (SQFD), making it possible to flexibly measure the similarity between video shots that contain a varying number of semantic concepts, and where these semantic concepts may also differ in terms of relevance and nature. Experimental results obtained for the MIRFLICKR-25000 image set (used as a source of collective knowledge) and the TRECVID 2009 video set (used to create query and reference video clips) demonstrate that model-free semantic concept detection and SQFD can be successfully used for the purpose of identifying NDVCs.
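The signature quadratic form distance compares two weighted concept signatures through a full cross-similarity matrix, which is what allows signatures with differing numbers of concepts to be matched. A minimal NumPy sketch, assuming an L2 ground distance and a Gaussian similarity kernel (the kernel choice and parameters here are our assumptions, not the paper's exact configuration):

```python
import numpy as np

def sqfd(feat_a, w_a, feat_b, w_b, alpha=1.0):
    """Signature quadratic form distance between two signatures, each a
    set of concept feature vectors with weights. Illustrative sketch:
    SQFD = sqrt(w A w^T), with w the concatenated (+/-) weights and A a
    similarity matrix built from the ground distance."""
    feats = np.vstack([feat_a, feat_b])
    w = np.concatenate([w_a, -w_b])  # positive weights for a, negative for b
    # pairwise squared L2 distances, turned into Gaussian similarities
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).sum(-1)
    A = np.exp(-alpha * d2)
    val = w @ A @ w
    return np.sqrt(max(val, 0.0))  # clamp tiny negative rounding error
```

Identical signatures yield distance zero, and because the Gaussian kernel is positive definite the distance is positive whenever the signatures differ.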
Objective and Impact Statement. We propose a rapid and accurate blood cell identification method exploiting deep learning and label-free refractive index (RI) tomography. Our computational approach, which fully utilizes the tomographic information of bone marrow (BM) white blood cells (WBCs), enables us not only to classify the blood cells with deep learning but also to quantitatively study their morphological and biochemical properties for hematology research. Introduction. Conventional methods for examining blood cells, such as blood smear analysis by medical professionals and fluorescence-activated cell sorting, require significant time, costs, and domain knowledge that could affect test results. While label-free imaging techniques that use a specimen's intrinsic contrast (e.g., multiphoton and Raman microscopy) have been used to characterize blood cells, their imaging procedures and instrumentation are relatively time-consuming and complex. Methods. The RI tomograms of the BM WBCs are acquired via a Mach-Zehnder interferometer-based tomographic microscope and classified by a 3D convolutional neural network. We test our deep learning classifier on four types of bone marrow WBCs collected from healthy donors (n=10): monocytes, myelocytes, B lymphocytes, and T lymphocytes. The quantitative parameters of the WBCs are directly obtained from the tomograms. Results. Our results show >99% accuracy for the binary classification of myeloids and lymphoids and >96% accuracy for the four-type classification of B and T lymphocytes, monocytes, and myelocytes. The feature learning capability of our approach is visualized via an unsupervised dimension reduction technique. Conclusion. We envision that the proposed cell classification framework can be easily integrated into existing blood cell investigation workflows, providing cost-effective and rapid diagnosis for hematologic malignancy.
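The quantitative biochemical parameters mentioned above can be read directly off an RI tomogram because, for most cellular protein content, RI scales linearly with dry-mass density via the refractive index increment (commonly taken as about 0.19 µm³/pg for protein). The helper below is a hedged sketch with illustrative default values (medium index, increment, voxel volume), not the authors' code:

```python
import numpy as np

def dry_mass_pg(ri_tomogram, n_medium=1.337, alpha=0.19, voxel_um3=0.01):
    """Estimate cellular dry mass (pg) from a 3D RI tomogram using the
    linear relation rho = (n - n_medium) / alpha, where alpha (~0.19
    um^3/pg) is the refractive index increment commonly assumed for
    protein. All default parameter values are illustrative."""
    # dry-mass density per voxel (pg / um^3); negative contrast clipped
    rho = np.clip(ri_tomogram - n_medium, 0.0, None) / alpha
    return float(rho.sum() * voxel_um3)
```

For example, a uniform 10×10×10-voxel region at n = 1.356 with 0.01 µm³ voxels corresponds to 1 pg of dry mass under these defaults.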
Recent advances in quantitative phase imaging (QPI) and artificial intelligence (AI) have opened an exciting frontier. The fast and label-free nature of QPI enables the rapid generation of large-scale, uniform-quality imaging data in two, three, and four dimensions. Subsequently, the AI-assisted interrogation of QPI data using data-driven machine learning techniques results in a variety of biomedical applications. Conversely, machine learning can also enhance QPI itself. Herein, we review the synergy between QPI and machine learning with a particular focus on deep learning. Furthermore, we provide practical guidelines and perspectives for further development.
Sepsis is a dysregulated immune response to infection that leads to organ dysfunction and is associated with high incidence and mortality rates. The lack of reliable biomarkers for the diagnosis and prognosis of sepsis is a major challenge in its management. We aimed to investigate the potential of three-dimensional label-free CD8+ T cell morphology as a biomarker for sepsis. This study included three time points in a sepsis recovery cohort (N = 8) and healthy controls (N = 20). Morphological features and their spatial distribution within cells were compared across patient statuses. We developed a deep learning model to predict the diagnosis and prognosis of sepsis from the internal cell morphology. Correlations between the morphological features and clinical indices were analysed. Cell morphological features and spatial distributions differed significantly between patients with sepsis and healthy controls, and between the survival and non-survival groups. The model for predicting the diagnosis and prognosis of sepsis achieved an area under the receiver operating characteristic curve of nearly 100% with only a few cells, and a strong correlation between the morphological features and clinical indices was observed. Our study highlights the potential of three-dimensional label-free CD8+ T cell morphology as a promising biomarker for sepsis. The approach is rapid, requires only a minimal blood sample, and has the potential to provide valuable information for the early diagnosis and prognosis of sepsis.
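The area under the ROC curve reported above can be computed without an explicit threshold sweep via the Mann-Whitney rank statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A generic sketch (our illustration, not the authors' implementation):

```python
import numpy as np

def auroc(scores_pos, scores_neg):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) score pairs in which the positive case
    outranks the negative one, counting ties as half a win.
    Generic helper; names are ours, not from the paper."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)
```

Perfectly separated scores give an AUROC of 1.0, and indistinguishable scores give 0.5.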