We proposed a composite image guided filtering technique for dynamic PET denoising to enable quantitatively enhanced time frames. The guided filter computes the filtering output by considering the content of a guidance image, which can be the input image itself or a different image. In this paper, the composite image of the entire time series is used as the guidance image. Thus, a local linear model is established between the composite image and each individual PET time frame. Subsequently, linear ridge regression is exploited to derive an explicit composite image guided filter. For validation, 20-minute FDG PET data from a NEMA NU 4-2008 IQ phantom were acquired in list-mode format on the Siemens Inveon microPET and subsequently divided and reconstructed into 20 frames. We compared the performance (including visual and quantitative profiles) of the proposed composite image guided filter (CIGF) with that of a classic Gaussian filter (GF) and a highly constrained back projection (HYPR) filter. The experimental results demonstrated that the proposed filter achieves superior visual and quantitative performance without sacrificing spatial resolution. The proposed CIGF is highly effective and has great potential for processing high-noise data from dynamic PET scans.
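For illustration, the following is a minimal NumPy sketch of the local linear (ridge regression) model underlying such a composite image guided filter. Here the composite guidance image is taken simply as the sum over all time frames, and the window radius and ridge parameter eps are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def composite_guided_filter(frames, radius=2, eps=1e-3):
    """Sketch of a composite-image guided filter for dynamic PET frames.

    frames : ndarray of shape (T, H, W), the dynamic PET time frames.
    radius : half-width of the square local window.
    eps    : ridge-regression regularization of the local linear model.
    """
    size = 2 * radius + 1
    # Composite (guidance) image: here simply the sum over all time frames.
    guide = frames.sum(axis=0)

    mean_g = uniform_filter(guide, size)
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2

    out = np.empty_like(frames)
    for t, p in enumerate(frames):
        mean_p = uniform_filter(p, size)
        cov_gp = uniform_filter(guide * p, size) - mean_g * mean_p
        # Local linear (ridge) coefficients: q = a * guide + b in each window.
        a = cov_gp / (var_g + eps)
        b = mean_p - a * mean_g
        # Average the coefficients over overlapping windows, then apply them.
        out[t] = uniform_filter(a, size) * guide + uniform_filter(b, size)
    return out
```

Because the guidance image is built from all frames, edges that persist across the time series are preserved in every filtered frame, while frame-specific noise is suppressed.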
Dynamic positron emission tomography (PET) imaging is a powerful tool that provides useful quantitative information on physiological and biochemical processes. However, the low signal-to-noise ratio of short dynamic frames makes accurate kinetic parameter estimation from noisy voxel-wise time activity curves (TACs) a challenging task. To address this problem, several spatial filters have been investigated to reduce the noise of each frame with noticeable gains, including the Gaussian filter, the bilateral filter, and wavelet-based filters. These filters usually consider only the local properties of each frame without exploiting potential kinetic information from the entire time series. Thus, in this work, to improve the accuracy of PET parametric imaging, we present a kinetics-induced bilateral filter (KIBF) that reduces the noise of dynamic image frames by incorporating the similarity between voxel-wise TACs within the bilateral filtering framework. The aim of the proposed KIBF algorithm is to reduce noise in homogeneous areas while preserving the distinct kinetics of regions of interest. Experimental results on a digital brain phantom and an in vivo rat study with typical 18F-FDG kinetics have shown that the KIBF algorithm achieves notable gains over existing algorithms in terms of quantitative accuracy measures and visual inspection.
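The core idea can be sketched as a bilateral filter whose range term compares whole voxel-wise TACs rather than single-frame intensities, so that voxels with similar kinetics are averaged together. The NumPy sketch below is a simplified illustration, not the authors' implementation: it uses periodic boundary handling via np.roll, and sigma_s and sigma_t are illustrative parameters.

```python
import numpy as np

def kinetics_induced_bilateral_filter(frames, radius=2, sigma_s=1.5, sigma_t=0.5):
    """Bilateral-type filter whose range weight is driven by TAC similarity.

    frames  : ndarray of shape (T, H, W), the dynamic PET time frames.
    sigma_s : spatial Gaussian width (pixels).
    sigma_t : TAC-similarity Gaussian width (in normalized activity units).
    """
    T, H, W = frames.shape
    out = np.zeros_like(frames, dtype=float)
    norm = np.zeros((H, W))

    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)]
    for dy, dx in offsets:
        # Spatial weight for this neighbourhood offset.
        w_s = np.exp(-(dy * dy + dx * dx) / (2.0 * sigma_s ** 2))
        # Shifted copy of the whole time series (wrap-around at borders).
        shifted = np.roll(frames, shift=(dy, dx), axis=(1, 2))
        # Range weight: mean squared distance between the full TACs.
        d2 = ((frames - shifted) ** 2).sum(axis=0) / T
        w = w_s * np.exp(-d2 / (2.0 * sigma_t ** 2))
        # The same kinetics-driven weight is applied to every frame.
        out += w[None] * shifted
        norm += w
    return out / norm[None]
```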
Multienergy computed tomography (MECT) has the potential to simultaneously offer multiple sets of energy-selective data belonging to specific energy windows. However, because the photon counts available in each specific energy window are insufficient compared with those in the whole energy window, MECT images reconstructed by analytical approaches often suffer from a poor signal-to-noise ratio (SNR) and strong streak artifacts. To address this drawback, in this work we present a penalized weighted least-squares (PWLS) scheme that incorporates the concept of structure tensor total variation (STV) regularization to improve the quality of MECT images from low-milliampere-seconds (low-mAs) data acquisitions. Henceforth the present scheme is referred to as `PWLS-STV' for simplicity. Specifically, the STV regularization is derived by penalizing the eigenvalues of the structure tensor at every point in the MECT images. It therefore provides more robust measures of image variation, which eliminates the patchy artifacts often observed with total variation (TV) regularization. Subsequently, an alternating optimization algorithm was adopted to minimize the objective function. Experiments with a digital XCAT phantom clearly demonstrate that the present PWLS-STV algorithm achieves greater gains than existing TV-based algorithms and the conventional filtered backprojection (FBP) algorithm in terms of noise-induced artifact suppression, resolution preservation, and material decomposition assessment.
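To make the regularizer concrete, the sketch below evaluates a structure-tensor-based penalty on a 2D image and assembles a PWLS-type cost. It is a simplified sketch under stated assumptions, not the paper's PWLS-STV implementation or its alternating optimization: the smoothing scale sigma, the system matrix A, the statistical weights w, and the penalty weight beta are all placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_tv(img, sigma=1.0):
    """Structure tensor total variation (STV)-style penalty on a 2D image:
    penalize the (square roots of the) eigenvalues of the smoothed structure
    tensor at every pixel. `sigma` is an assumed smoothing scale."""
    gy, gx = np.gradient(img)
    # Structure tensor components, smoothed over a local neighbourhood.
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Closed-form eigenvalues of the 2x2 symmetric tensor at each pixel.
    tr = jxx + jyy
    det = jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    lam1, lam2 = tr / 2.0 + disc, tr / 2.0 - disc
    # Sum over pixels of the square roots of the eigenvalues
    # (a Schatten-1-type measure of local image variation).
    return np.sum(np.sqrt(np.maximum(lam1, 0)) + np.sqrt(np.maximum(lam2, 0)))

def pwls_stv_objective(x, y, A, w, beta, sigma=1.0):
    """PWLS-STV-style cost: weighted data fidelity plus the STV penalty.
    A maps the vectorized image x to projection data y, w holds statistical
    weights, and beta balances the two terms (all assumed placeholders)."""
    r = A @ x.ravel() - y
    return 0.5 * np.sum(w * r * r) + beta * structure_tensor_tv(x, sigma)
```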
Accurate reorientation and segmentation of the left ventricle (LV) are essential for the quantitative analysis of myocardial perfusion imaging (MPI). This study proposes an end-to-end model, named Multi-Scale Spatial Transformer UNet (MS-ST-UNet), which combines a multi-scale spatial transformer network (MSSTN) and a multi-scale UNet (MSUNet) module to perform simultaneous reorientation and segmentation of the LV region in nuclear cardiac images. A multi-scale sampler produces images at varying resolutions, while scale transformer (ST) blocks are employed to align the scales of the features. The proposed method is trained and tested on two different nuclear cardiac image modalities: 13N-ammonia positron emission tomography (PET) and 99mTc-sestamibi single photon emission computed tomography (SPECT). MS-ST-UNet attains Dice similarity coefficient (DSC) scores of 91.48% and 94.81% for the PET LV myocardium (LV-MY) and the SPECT LV-MY, respectively. Additionally, the mean squared error (MSE) between the predicted rigid registration parameters and the ground truth decreases to below 1.4×10⁻². The experimental findings indicate that MS-ST-UNet yields notably lower registration errors and more precise boundary detection for the LV structure than existing methods. This joint learning framework promotes mutual enhancement between the reorientation and segmentation tasks, leading to cutting-edge performance and an efficient image processing workflow.
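To make the reorientation step concrete, here is a minimal 2D PyTorch sketch of a spatial-transformer block: a small regressor predicts rigid parameters that define an affine grid used to resample the image before it is passed to a segmentation network. The 2D setting, layer sizes, and the module name RigidReorient2D are illustrative assumptions, not the MS-ST-UNet architecture itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RigidReorient2D(nn.Module):
    """Sketch of a spatial-transformer reorientation block (2D simplification):
    predict a rotation angle and translation, build the corresponding rigid
    affine grid, and resample the input image with it."""

    def __init__(self):
        super().__init__()
        self.regress = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 3),                 # rotation angle, tx, ty
        )

    def forward(self, x):
        angle, tx, ty = self.regress(x).unbind(dim=1)
        cos, sin = torch.cos(angle), torch.sin(angle)
        # One rigid 2x3 affine matrix per image in the batch.
        theta = torch.stack([
            torch.stack([cos, -sin, tx], dim=1),
            torch.stack([sin,  cos, ty], dim=1),
        ], dim=1)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        # Reoriented image, which a downstream UNet would then segment.
        return F.grid_sample(x, grid, align_corners=False)

# Example usage on a toy batch of 64x64 images.
reoriented = RigidReorient2D()(torch.randn(2, 1, 64, 64))
```

Because the resampling is differentiable, segmentation losses can backpropagate into the parameter regressor, which is what allows the reorientation and segmentation tasks to be trained jointly.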
We proposed a maximum a posteriori (MAP) framework for incorporating information from co-registered anatomical images into PET image reconstruction through a novel anato-functional joint prior. The shape of the hyperbolic potential function is determined by the voxel intensity differences within the anatomical image, while the penalization is computed from voxel intensity differences in the reconstructed PET image. Using realistically simulated short-duration 18F-FDG PET scan data, we optimized the performance of the proposed MAP reconstruction with the joint prior (JP-MAP) and compared it with conventional 3D maximum likelihood expectation maximization (MLEM) and MAP reconstructions. The proposed JP-MAP reconstruction algorithm yielded quantitatively enhanced reconstructed images, as demonstrated in an extensive 18F-FDG PET simulation study.
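A simplified sketch of such an anato-functional joint prior is given below: a hyperbolic potential is applied to PET neighbour differences, while its shape parameter at each voxel pair is modulated by the corresponding anatomical difference, so that the penalty relaxes across anatomical boundaries. The nearest-neighbour formulation and the parameters delta0 and gamma are illustrative assumptions, not the exact form of the prior in the paper.

```python
import numpy as np

def hyperbolic_joint_prior(pet, anat, delta0=1.0, gamma=1.0):
    """Sketch of an anato-functional joint prior energy.

    pet, anat : co-registered PET and anatomical images (same shape).
    delta0    : baseline shape parameter of the hyperbolic potential (assumed).
    gamma     : strength of the anatomical modulation (assumed).
    """
    energy = 0.0
    for axis in range(pet.ndim):
        d_pet = np.diff(pet, axis=axis)    # PET neighbour differences
        d_anat = np.diff(anat, axis=axis)  # anatomical neighbour differences
        # Larger anatomical differences widen the potential, penalizing PET
        # differences less across anatomical boundaries (edge preservation).
        delta = delta0 * (1.0 + gamma * np.abs(d_anat))
        energy += np.sum(delta ** 2 * (np.sqrt(1.0 + (d_pet / delta) ** 2) - 1.0))
    return energy
```

In a MAP reconstruction this energy (times a hyperparameter) would be added to the negative log-likelihood and minimized, for example with a one-step-late EM-type update.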
This paper presents a GATE (Geant4 Application for Tomographic Emission) simulation study to evaluate the performance of a magnetically compatible small-animal PET scanner prototype. The scanner consists of 60 detectors in total, arranged in 5 contiguous rings. Each detector comprises a lutetium-yttrium oxyorthosilicate (LYSO) scintillator array read out by a SiPM array. Each scintillator array consists of 13×13 crystals of size 1.8×1.8×15 mm³. The diameter of the crystal ring is 102 mm, and the scanner has an axial extent of 125.4 mm, providing a maximal acceptance angle of 50.8 degrees. The spatial resolution, sensitivity, scatter fraction, and image quality performance of this scanner were assessed following the NEMA NU 4 standard using the data obtained from GATE. In the simulation, the energy resolution was set to 26%, while the energy window and coincidence time window were 300-650 keV and 3.75 ns, respectively. The results show that the transverse spatial resolution is better than 2 mm in the central region of the FOV. The sensitivity reaches 20.97%, and the NECR peak reaches 2,256 kcps with a scatter fraction of 20.8%. Moreover, the percentage standard deviation of the uniform region is 12.8%, while the spill-over ratios of the air and water regions are 14.77% and 19.19%, respectively.
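The quoted maximal acceptance angle can be sanity-checked from the stated geometry. The short snippet below assumes the angle is measured between a line of response spanning the full axial extent and the transaxial plane; small deviations from the quoted 50.8 degrees can arise from how the axial extent is defined (e.g., crystal centers versus edges).

```python
import math

# Consistency check of the quoted scanner geometry.
ring_diameter_mm = 102.0   # crystal ring diameter
axial_extent_mm = 125.4    # total axial coverage of the 5 rings

max_acceptance_deg = math.degrees(math.atan(axial_extent_mm / ring_diameter_mm))
print(f"maximal acceptance angle ≈ {max_acceptance_deg:.1f} degrees")
# prints ≈ 50.9 degrees, close to the quoted 50.8 degrees
```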