Radiopharmaceutical therapy (RPT) is an emerging prostate cancer treatment that delivers radiation to specific molecules within the tumor microenvironment (TME), causing DNA damage and cell death. Given TME heterogeneity, it is crucial to explore RPT dosimetry and biological impacts at the cellular level. We integrated spatial transcriptomics (ST) with computational modeling to investigate the effects of RPT targeting prostate-specific membrane antigen (PSMA), fibroblast activation protein (FAP), and gastrin-releasing peptide receptor (GRPR), each labelled with the beta-emitting radionuclide lutetium-177 (177Lu).
Spatially resolved transcriptomics (ST) has revolutionized the field of biology by providing a powerful tool for analyzing gene expression in situ. However, current ST methods, particularly barcode-based methods, have limitations in reconstructing high-resolution images from barcodes sparsely distributed in slides. Here, we present SuperST, an algorithm that enables the reconstruction of dense matrices (higher-resolution and non-zero-inflated matrices) from low-resolution ST libraries. SuperST is based on deep image prior, which reconstructs spatial gene expression patterns as image matrices. Compared with previous methods, SuperST generated output images that more closely resembled immunofluorescence images for given gene expression maps. Furthermore, we demonstrated how one can combine images created by SuperST with computer vision algorithms. In this context, we proposed a method for extracting features from the images, which can aid in spatial clustering of genes. By providing a dense matrix for each gene in situ, SuperST can successfully address the resolution and zero-inflation issues.
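SuperST itself fits a deep image prior network to each gene map; as a much simpler illustrative stand-in (not the published algorithm), the idea of turning sparsely sampled spot values into a dense, non-zero-inflated image can be sketched with Gaussian-weighted interpolation over a grid. The spot coordinates and values below are hypothetical:

```python
import math

def densify(spots, shape, sigma=1.5):
    """Toy densification of a sparse ST spot grid: each pixel of the output
    image is a Gaussian-weighted average of the observed spot values.
    'spots' maps (row, col) -> expression value; 'shape' is (rows, cols).
    This is an illustrative analogue of producing a dense matrix per gene,
    not SuperST's deep-image-prior reconstruction."""
    rows, cols = shape
    dense = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            wsum = vsum = 0.0
            for (sr, sc), v in spots.items():
                w = math.exp(-((r - sr) ** 2 + (c - sc) ** 2) / (2 * sigma ** 2))
                wsum += w
                vsum += w * v
            dense[r][c] = vsum / wsum
    return dense

# Hypothetical 5x5 tissue section with four measured spots at the corners
spots = {(0, 0): 1.0, (0, 4): 0.0, (4, 0): 0.0, (4, 4): 1.0}
img = densify(spots, (5, 5))
```

Every pixel of `img` is non-zero, so downstream computer vision algorithms can operate on it as an ordinary image matrix.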
Abstract Background The current diagnostic criteria for temporomandibular disorders (TMD) do not require imaging for the diagnosis of degenerative joint disease (DJD) of the temporomandibular joint (TMJ) condyle, and there is a lack of data investigating the effectiveness of imaging modalities in predicting long‐term TMJ DJD prognosis. Objectives To verify the association between initial bone scintigraphy results and long‐term DJD bone changes occurring in the TMJ condyle on cone beam computed tomography (CBCT). Methods Initial bone scintigraphy, panoramic radiography and CBCT results were analysed in relation to long‐term (12 months) TMJ DJD bone change on CBCTs in 55 TMD patients (110 joints). Clinical and radiographic indices were statistically analysed among three groups (improved, no change, and worsened) based on long‐term TMJ DJD prognosis calculated by the destructive change index (DCI). Results Neither the uptake ratio nor visual assessment results from initial bone scintigraphy showed a significant difference according to long‐term condylar bone change groups. The cut‐off value of the bone scintigraphy uptake ratio was 2.53 for long‐term worsening of TMJ DJD. Worsening of TMJ DJD was significantly associated with the diagnosis based on panoramic radiography (p = .011) and CBCT (p < .001). Initial DCI (β = −.291, p = .046) had a significant association with long‐term worsening of TMJ DJD. Conclusion Initial bone scintigraphy results did not show sufficiently close associations with long‐term TMJ DJD prognosis. This should be considered in the selection process of imaging modalities for TMJ DJD patients. Future studies are needed to develop prognostic indices that combine both clinical and imaging content for improved predictive ability.
Introduction: Basal/acetazolamide brain perfusion SPECT has been used routinely to evaluate functional hemodynamics in patients with carotid artery stenosis. To detect any decrease in vascular perfusion, nuclear medicine physicians rely principally on visual analysis. However, developing perceptual expertise in basal/acetazolamide brain perfusion SPECT requires time and experience. Recently, a 3D CNN-based interpretation model for brain perfusion SPECT images has been developed. This study aims to compare the diagnostic accuracy of interpreting brain perfusion SPECT images with and without the 3D CNN-based score.
Materials and Methods: One hundred and five cases (43.2 ± 12.3 years) of basal/acetazolamide brain perfusion SPECT were retrospectively collected. A perfusion score was generated by the 3D CNN model for each image. Each image was read by 2 nuclear medicine physicians who were novices in brain perfusion SPECT (less than 7 months of experience), with and without the score from the 3D CNN model for each vessel territory. Agreement between the novice readers and expert reading was examined for each vessel territory.
Results: Analysis between novice reader 1 and the expert readers showed that the 3D CNN-based score improved agreement from slight to strong for basal R-ACA, L-ACA, R-ICA and Diamox R-ACA (K = 0.08 vs 0.873, 0.194 vs 0.817, 0.161 vs 0.912, 0.161 vs 0.912), from fair to strong for basal R-MCA, L-ICA and Diamox L-ACA (K = 0.264 vs 0.887, 0.342 vs 0.724, 0.256 vs 0.914), from moderate to strong for Diamox R-MCA, L-MCA and L-ICA (K = 0.457 vs 0.955, 0.382 vs 0.921, 0.559 vs 0.954), and from slight to strong for basal L-MCA (K = 0.193 vs 0.554). Analysis between novice reader 2 and the expert readers showed that the 3D CNN-based score improved agreement from moderate to strong for all vessel regions (K = 0.512–0.734 vs 0.912–0.923).
Conclusions: 3D CNN-based scores may aid physicians in detecting abnormalities of basal perfusion and vascular reserve per arterial territory, especially physicians who have insufficient experience in reading brain perfusion SPECT.
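The agreement statistic reported above is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch of the computation, with hypothetical reader labels (not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical labels of the same cases.
    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    comes from the marginal label frequencies of each rater."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical normal/abnormal calls for 10 vessel territories
novice = ["n", "n", "a", "n", "a", "a", "n", "n", "a", "n"]
expert = ["n", "a", "a", "n", "a", "n", "n", "n", "a", "n"]
kappa = cohens_kappa(novice, expert)  # ~0.583, "moderate" on the Landis-Koch scale
```

On the Landis-Koch scale used in such studies, kappa below 0.20 is slight, 0.21-0.40 fair, 0.41-0.60 moderate, and higher values indicate substantial to almost perfect agreement.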
Abstract Since many single-cell RNA-seq (scRNA-seq) data are obtained after cell sorting, such as when investigating immune cells, tracking the cellular landscape by integrating single-cell data with spatial transcriptomic data is limited by cell type and cell composition mismatches between the two datasets. We developed a method, spSeudoMap, which utilizes sorted scRNA-seq data to create virtual cell mixtures that closely mimic the gene expression of spatial data and trains a domain adaptation model for predicting spatial cell compositions. The method was applied in brain and breast cancer tissues and accurately predicted the topography of cell subpopulations. spSeudoMap may help clarify the roles of a few but crucial cell types.
Abstract Quantitative SPECT/CT is potentially useful for more accurate and reliable measurement of glomerular filtration rate (GFR) than conventional planar scintigraphy. However, manual drawing of a volume of interest (VOI) on renal parenchyma in CT images is a labor-intensive and time-consuming task. The aim of this study is to develop a fully automated GFR quantification method based on a deep learning approach to the 3D segmentation of kidney parenchyma in CT. We automatically segmented the kidneys in CT images using the proposed method with remarkably high Dice similarity coefficient relative to the manual segmentation (mean = 0.89). The GFR values derived using manual and automatic segmentation methods were strongly correlated (R2 = 0.96). The absolute difference between the individual GFR values using manual and automatic methods was only 2.90%. Moreover, the two segmentation methods had comparable performance in the urolithiasis patients and kidney donors. Furthermore, both segmentation modalities showed significantly decreased individual GFR in symptomatic kidneys compared with the normal or asymptomatic kidney groups. The proposed approach enables fast and accurate GFR measurement.
Objectives: Glomerular filtration rate (GFR), the rate at which the kidney filters waste from the blood, is considered the most useful test to measure the level of renal function and determine the stage of kidney disease. Quantitative SPECT/CT is potentially useful for more accurate and reliable GFR measurement than conventional planar scintigraphy [1]. However, manual drawing of a volume of interest (VOI) on renal parenchyma in CT images is a labor-intensive and time-consuming task, usually taking around 15 min per scan. The aim of this study is to develop a fully automated GFR quantification method based on a deep learning approach to the three-dimensional (3D) segmentation of kidney parenchyma in CT. Methods: Two hundred and ninety (290) patients underwent quantitative 99mTc-DTPA SPECT/CT (GE Discovery NM/CT 670) scans. One-min SPECT data were acquired in a continuous mode 2 min after the intravenous injection of 370 MBq 99mTc-DTPA. The SPECT images were corrected for attenuation, scatter, and collimator-detector response, and cross-calibrated with a dose calibrator. A nuclear medicine physician drew 2D regions of interest (ROIs) on renal parenchyma in every 80 to 100 coronal CT slices using the vendor's Q. Metrix software, which provides automatic ROI interpolation between the slices. To reduce the discontinuity in 3D space caused by the 2D ROI drawing, we applied 3D volume smoothing and morphological operations. We used a modified 3D U-Net that consists of contraction and expansion paths and learns an end-to-end mapping between CT and renal parenchyma segmented volumes. Each path has 4 sequential layers composed of a convolution with 3 × 3 × 3 kernels, ReLU (Rectified Linear Unit) and pooling layers (1 × 1 × 1 kernels, Sigmoid for the last layer of the expansion path). Each layer is updated using error back-propagation with the Adam (adaptive moment estimation) optimizer. Symmetric skip connections between convolutional and up-convolutional layers are used.
The U-Net was trained using 240 randomly selected datasets and validated using 50 datasets. Before the training, CT images were down-sampled to 200 × 200 × 160 (2.5 mm3) and cropped into a 144 × 86 × 70 matrix to reduce the input data size. For the quantitative performance evaluation, the Dice similarity coefficient between the manual drawing and the deep learning output was calculated. We calculated % injected dose (%ID) by applying the deep learning output (3D VOI) to the quantitative SPECT images. We also assessed the correlation between the GFR measurements (%ID × 9.1462 + 23.0653) using both segmentation methods [1]. To confirm the consistency of performance, we performed five-fold cross-validation. Results: We could automatically segment the kidneys in CT images using the proposed method with a remarkably high Dice similarity coefficient relative to the manual segmentation (mean ± SD = 0.82 ± 0.054). Although manual segmentation resulted in discontinuity between slices and the vendor's program sometimes offered wrong ROI interpolation results, the proposed deep learning approach provided 3D kidney parenchyma VOIs with no such discontinuity between slices. The GFR values derived using the manual and automatic segmentation methods were strongly correlated (R2 = 0.94). The absolute difference between the GFR values using the manual (45.40±7.48 ml/min) and automatic (46.05±7.32 ml/min) methods was only 3.30±2.89% (left kidney: 3.70±3.16%, right: 3.58±3.18%) in the first cross-validation. The absolute differences obtained in the other cross-validations were 5.75±4.91%, 5.29±3.78%, 3.08±2.81% and 3.41±3.55%, respectively. Conclusion: The proposed deep learning approach to the 3D segmentation of kidney parenchyma in CT enables fast and accurate GFR measurement. Accordingly, this method will be useful for facilitating GFR measurement using quantitative SPECT/CT technology.
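The two evaluation quantities used above, the Dice similarity coefficient between segmentations and the linear GFR model GFR = %ID × 9.1462 + 23.0653, can be sketched as follows; the voxel masks below are hypothetical toy data, not study segmentations:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentations,
    given as sets of foreground voxel indices: 2|A∩B| / (|A| + |B|)."""
    inter = len(mask_a & mask_b)
    return 2 * inter / (len(mask_a) + len(mask_b))

def gfr_from_pct_id(pct_id):
    """GFR (ml/min) from renal %ID at 2 min post-injection, using the
    linear model quoted in the abstract: %ID x 9.1462 + 23.0653."""
    return pct_id * 9.1462 + 23.0653

# Hypothetical manual vs automatic masks over a tiny voxel grid
manual = {(1, 1, 1), (1, 2, 1), (2, 1, 1), (2, 2, 1)}
auto = {(1, 1, 1), (1, 2, 1), (2, 1, 1), (3, 3, 1)}
d = dice(manual, auto)          # 2*3 / (4+4) = 0.75
g = gfr_from_pct_id(2.5)        # ~45.93 ml/min for a hypothetical 2.5 %ID
```

In the study, the %ID is taken from the quantitative SPECT counts inside the 3D kidney VOI, so a more accurate segmentation directly translates into a more accurate GFR estimate.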
Abstract Background: We aimed to evaluate the reliability and feasibility of visual grading systems and various quantitative indexes of [99mTc]Tc-DPD imaging for cardiac amyloidosis (CA). Methods: Patients who underwent [99mTc]Tc-DPD imaging with suspicion of CA were enrolled. On the planar image, myocardial uptake was visually graded using Perugini's and Dorbala's methods (PS and DS). As [99mTc]Tc-DPD indexes, the heart-to-whole body ratio (H/WB) and heart-to-contralateral lung ratio (H/CL) were measured on the planar image. SUVmax, SUVmean, total myocardial uptake (TMU), and C-index were measured on SPECT/CT. Inter-observer agreement of the indexes and their association with visual grading and clinical factors were evaluated. Results: A total of 152 [99mTc]Tc-DPD images, of which 18 were positive, were analyzed. Inter-observer agreement was high for both DS (k = 0.95) and PS (k = 0.96). However, DS showed a higher correlation with quantitative indexes than PS. Inter-observer agreement was also high for SPECT/CT indexes, particularly SUVmax. SUVmax was significantly different between different DS groups (P = 0.014–0.036), and showed excellent correlations with H/WB and H/CL (r = 0.898 and 0.910). SUVmax also showed significant differences between normal, AL, and ATTR pathology (P = 0.022–0.037), and a significant correlation with extracellular volume on cardiac MRI (r = 0.772, P < 0.001). Conclusions: DS is a visual grading system for CA that matches quantitative indexes more closely than PS. SUVmax is a reliable quantitative index on SPECT/CT, with high inter-observer agreement and correlations with visual grade and cardiac MRI findings.
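The planar indexes above are count-ratio statistics; a minimal sketch of how H/WB and H/CL could be computed from ROI counts is below. The ROI count values are hypothetical, and the exact ROI definitions (background correction, geometric-mean projections) follow the cited indexes only loosely:

```python
def roi_mean(counts):
    """Mean counts per pixel within a region of interest."""
    return sum(counts) / len(counts)

def heart_to_contralateral_lung(heart_counts, lung_counts):
    """H/CL: mean counts in the heart ROI divided by mean counts in a
    mirrored ROI over the contralateral lung on the planar image."""
    return roi_mean(heart_counts) / roi_mean(lung_counts)

def heart_to_whole_body(heart_total, whole_body_total):
    """H/WB: total heart ROI counts divided by total whole-body counts."""
    return heart_total / whole_body_total

# Hypothetical per-pixel ROI counts and totals
h_cl = heart_to_contralateral_lung([120, 140, 130], [60, 70, 65])  # 130/65 = 2.0
h_wb = heart_to_whole_body(5000, 100000)                           # 0.05
```

In practice such ratios are attractive because they need no absolute calibration, whereas the SPECT/CT indexes (SUVmax, TMU) require a calibrated quantitative reconstruction.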
Abstract Profiling molecular features associated with the morphological landscape of tissue is crucial for investigating the structural and spatial patterns that underlie the biological function of tissues. In this study, we present a new method, spatial gene expression patterns by deep learning of tissue images (SPADE), to identify important genes associated with morphological contexts by combining spatial transcriptomic data with coregistered images. SPADE incorporates deep learning-derived image patterns with spatially resolved gene expression data to extract morphological context markers. Morphological features that correspond to spatial maps of the transcriptome were extracted from image patches surrounding each spot and were subsequently represented by image latent features. The molecular profiles correlated with the image latent features were identified. The extracted genes could be further analyzed to discover functional terms and exploited to extract clusters maintaining morphological contexts. We apply our approach to spatial transcriptomic data from different tissues, platforms and types of images to demonstrate an unbiased method that is capable of obtaining image-integrated gene expression trends.
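The core matching step described above, ranking genes by how well their expression across spots tracks an image-derived latent feature, reduces to a per-gene correlation. A toy sketch with hypothetical latent-feature and expression vectors (the real pipeline uses deep-learning embeddings of image patches, which are not reproduced here):

```python
import math

def pearson(x, y):
    """Pearson correlation between one image latent feature and one gene's
    expression, both sampled at the same spatial spots."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical latent-feature values and one gene's expression at 5 spots
latent = [0.1, 0.4, 0.35, 0.8, 0.95]
gene = [1.0, 2.1, 2.0, 3.9, 4.8]
r = pearson(latent, gene)  # strongly positive: this gene tracks the feature
```

Computing `r` for every gene and keeping the top-ranked ones yields a candidate list of morphological context markers, which can then be fed to functional-term enrichment or clustering as the abstract describes.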